Test Report: QEMU_macOS 19423

7f7446252791c927139509879c70af875912dc64:2024-08-18:35842

Failed tests: 94 of 270

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.06
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.95
46 TestCertOptions 10.23
47 TestCertExpiration 195.42
48 TestDockerFlags 10.23
49 TestForceSystemdFlag 10.37
50 TestForceSystemdEnv 10.87
95 TestFunctional/parallel/ServiceCmdConnect 30.58
167 TestMultiControlPlane/serial/StopSecondaryNode 312.31
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.13
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.27
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.56
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 226.79
177 TestImageBuild/serial/Setup 10.24
180 TestJSONOutput/start/Command 9.78
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.04
209 TestMinikubeProfile 10.2
212 TestMountStart/serial/StartWithMountFirst 9.94
215 TestMultiNode/serial/FreshStart2Nodes 10
216 TestMultiNode/serial/DeployApp2Nodes 100.44
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.07
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.13
223 TestMultiNode/serial/StartAfterStop 51.67
224 TestMultiNode/serial/RestartKeepsNodes 8.92
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 3.12
227 TestMultiNode/serial/RestartMultiNode 5.25
228 TestMultiNode/serial/ValidateNameConflict 20.25
232 TestPreload 10.1
234 TestScheduledStopUnix 10.16
235 TestSkaffold 12.6
238 TestRunningBinaryUpgrade 594.26
240 TestKubernetesUpgrade 19.02
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.41
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.03
256 TestStoppedBinaryUpgrade/Upgrade 573.08
258 TestPause/serial/Start 9.85
268 TestNoKubernetes/serial/StartWithK8s 9.91
269 TestNoKubernetes/serial/StartWithStopK8s 5.33
270 TestNoKubernetes/serial/Start 5.32
274 TestNoKubernetes/serial/StartNoArgs 5.31
276 TestNetworkPlugins/group/auto/Start 9.91
277 TestNetworkPlugins/group/kindnet/Start 9.78
278 TestNetworkPlugins/group/calico/Start 9.82
279 TestNetworkPlugins/group/custom-flannel/Start 9.9
280 TestNetworkPlugins/group/false/Start 9.93
281 TestNetworkPlugins/group/enable-default-cni/Start 9.97
282 TestNetworkPlugins/group/flannel/Start 9.77
283 TestNetworkPlugins/group/bridge/Start 9.92
284 TestNetworkPlugins/group/kubenet/Start 10.09
287 TestStartStop/group/old-k8s-version/serial/FirstStart 10.14
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 10.09
299 TestStartStop/group/no-preload/serial/DeployApp 0.09
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
303 TestStartStop/group/no-preload/serial/SecondStart 5.26
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.91
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/no-preload/serial/Pause 0.1
311 TestStartStop/group/newest-cni/serial/FirstStart 10.13
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.65
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
327 TestStartStop/group/embed-certs/serial/FirstStart 9.96
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
331 TestStartStop/group/newest-cni/serial/Pause 0.1
332 TestStartStop/group/embed-certs/serial/DeployApp 0.09
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
336 TestStartStop/group/embed-certs/serial/SecondStart 5.25
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/embed-certs/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (17.06s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-039000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-039000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.061371208s)

-- stdout --
	{"specversion":"1.0","id":"251f7c33-290e-4ea5-9a5c-b557dd8d69c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-039000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1dd94607-55b5-4c34-aa60-21812ca69af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"45f6ee48-8086-417e-9203-17c2d2d3cfd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig"}}
	{"specversion":"1.0","id":"06d556a9-47f5-44c2-a79b-ed1b2221825d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ef520ce1-11d0-4914-83ac-8f6496fe8f10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"703ad88c-9989-44df-b0e8-7b7cfb70e892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube"}}
	{"specversion":"1.0","id":"608d5889-f73e-4f4c-95a8-466591a358cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"40ec734c-6131-43fc-bc09-f7df210524e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc77c441-8609-4c6b-93ef-ee0df581cade","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c1ad5c14-05a9-4306-9a90-d8d9b6cbf563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"78619b63-d1e2-4005-b5a0-332d2035b646","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-039000\" primary control-plane node in \"download-only-039000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"052f14c1-5efc-4caa-a586-1b8fdbabac2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ded2b1e5-08bc-41f0-a9bb-f8d1351ec187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0] Decompressors:map[bz2:0x14000590d70 gz:0x14000590d78 tar:0x14000590cf0 tar.bz2:0x14000590d00 tar.gz:0x14000590d40 tar.xz:0x14000590d50 tar.zst:0x14000590d60 tbz2:0x14000590d00 tgz:0x14000590d40 txz:0x14000590d50 tzst:0x14000590d60 xz:0x14000590d80 zip:0x14000590dc0 zst:0x14000590d88] Getters:map[file:0x14000201d40 http:0x140001221e0 https:0x14000122230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"1daa7b42-b34b-4058-9695-b175c24a56aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0818 11:37:16.360729    1461 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:37:16.360890    1461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:16.360893    1461 out.go:358] Setting ErrFile to fd 2...
	I0818 11:37:16.360895    1461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:16.361013    1461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	W0818 11:37:16.361109    1461 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-984/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-984/.minikube/config/config.json: no such file or directory
	I0818 11:37:16.362456    1461 out.go:352] Setting JSON to true
	I0818 11:37:16.381073    1461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":406,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:37:16.381134    1461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:37:16.386593    1461 out.go:97] [download-only-039000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 11:37:16.386701    1461 notify.go:220] Checking for updates...
	W0818 11:37:16.386729    1461 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 11:37:16.390648    1461 out.go:169] MINIKUBE_LOCATION=19423
	I0818 11:37:16.392358    1461 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:37:16.395732    1461 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:37:16.398738    1461 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:37:16.400287    1461 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	W0818 11:37:16.407580    1461 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 11:37:16.407777    1461 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:37:16.410699    1461 out.go:97] Using the qemu2 driver based on user configuration
	I0818 11:37:16.410716    1461 start.go:297] selected driver: qemu2
	I0818 11:37:16.410729    1461 start.go:901] validating driver "qemu2" against <nil>
	I0818 11:37:16.410806    1461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:37:16.414572    1461 out.go:169] Automatically selected the socket_vmnet network
	I0818 11:37:16.420499    1461 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0818 11:37:16.420709    1461 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 11:37:16.420797    1461 cni.go:84] Creating CNI manager for ""
	I0818 11:37:16.420815    1461 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 11:37:16.420871    1461 start.go:340] cluster config:
	{Name:download-only-039000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:37:16.426831    1461 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:16.430542    1461 out.go:97] Downloading VM boot image ...
	I0818 11:37:16.430569    1461 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0818 11:37:26.571543    1461 out.go:97] Starting "download-only-039000" primary control-plane node in "download-only-039000" cluster
	I0818 11:37:26.571566    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:26.633804    1461 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:26.633830    1461 cache.go:56] Caching tarball of preloaded images
	I0818 11:37:26.634014    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:26.638391    1461 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 11:37:26.638398    1461 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:26.727060    1461 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:32.085530    1461 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:32.085694    1461 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:32.780380    1461 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 11:37:32.780563    1461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-039000/config.json ...
	I0818 11:37:32.780579    1461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-039000/config.json: {Name:mk9442a1cb9f1b069c8e1d28f86c1f8bb56f7572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:37:32.780823    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:32.781073    1461 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0818 11:37:33.348962    1461 out.go:193] 
	W0818 11:37:33.355999    1461 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0] Decompressors:map[bz2:0x14000590d70 gz:0x14000590d78 tar:0x14000590cf0 tar.bz2:0x14000590d00 tar.gz:0x14000590d40 tar.xz:0x14000590d50 tar.zst:0x14000590d60 tbz2:0x14000590d00 tgz:0x14000590d40 txz:0x14000590d50 tzst:0x14000590d60 xz:0x14000590d80 zip:0x14000590dc0 zst:0x14000590d88] Getters:map[file:0x14000201d40 http:0x140001221e0 https:0x14000122230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0818 11:37:33.356029    1461 out_reason.go:110] 
	W0818 11:37:33.363896    1461 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 11:37:33.366864    1461 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-039000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.06s)
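
Note: the exit status 40 above comes from the kubectl checksum URL returning 404. Kubernetes v1.20.0 predates published darwin/arm64 release binaries (Go itself only gained darwin/arm64 support in Go 1.16), so this download cannot succeed on an Apple Silicon agent. The 404 can be confirmed independently of minikube with plain curl:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

This should print an HTTP 404 status line, matching the "bad response code: 404" in the getter error above.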

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
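
Note: this failure is downstream of the json-events failure above. Because the kubectl download 404ed, the binary was never written to the cache path the test stats, so the file-exists check necessarily fails.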

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-629000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-629000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.795999375s)

-- stdout --
	* [offline-docker-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-629000" primary control-plane node in "offline-docker-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:23:35.532094    3438 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:35.532236    3438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:35.532240    3438 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:35.532242    3438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:35.532383    3438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:23:35.533437    3438 out.go:352] Setting JSON to false
	I0818 12:23:35.551148    3438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3185,"bootTime":1724005830,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:23:35.551240    3438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:23:35.556705    3438 out.go:177] * [offline-docker-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:23:35.564538    3438 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:23:35.564550    3438 notify.go:220] Checking for updates...
	I0818 12:23:35.571636    3438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:23:35.574565    3438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:23:35.577537    3438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:23:35.580576    3438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:23:35.583555    3438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:23:35.586934    3438 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:35.586996    3438 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:23:35.590535    3438 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:23:35.598528    3438 start.go:297] selected driver: qemu2
	I0818 12:23:35.598538    3438 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:23:35.598546    3438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:23:35.600448    3438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:23:35.603617    3438 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:23:35.606570    3438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:23:35.606587    3438 cni.go:84] Creating CNI manager for ""
	I0818 12:23:35.606597    3438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:23:35.606601    3438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:23:35.606630    3438 start.go:340] cluster config:
	{Name:offline-docker-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:23:35.610111    3438 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:35.617544    3438 out.go:177] * Starting "offline-docker-629000" primary control-plane node in "offline-docker-629000" cluster
	I0818 12:23:35.621526    3438 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:23:35.621554    3438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:23:35.621563    3438 cache.go:56] Caching tarball of preloaded images
	I0818 12:23:35.621633    3438 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:23:35.621638    3438 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:23:35.621702    3438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/offline-docker-629000/config.json ...
	I0818 12:23:35.621712    3438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/offline-docker-629000/config.json: {Name:mka340b33a927117afa5e6a360c8a4b43d3abf81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:23:35.621989    3438 start.go:360] acquireMachinesLock for offline-docker-629000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:35.622024    3438 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "offline-docker-629000"
	I0818 12:23:35.622035    3438 start.go:93] Provisioning new machine with config: &{Name:offline-docker-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:35.622061    3438 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:35.626542    3438 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:35.642557    3438 start.go:159] libmachine.API.Create for "offline-docker-629000" (driver="qemu2")
	I0818 12:23:35.642594    3438 client.go:168] LocalClient.Create starting
	I0818 12:23:35.642684    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:35.642715    3438 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:35.642731    3438 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:35.642772    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:35.642795    3438 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:35.642810    3438 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:35.643188    3438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:35.799187    3438 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:35.833336    3438 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:35.833345    3438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:35.833539    3438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:35.854186    3438 main.go:141] libmachine: STDOUT: 
	I0818 12:23:35.854205    3438 main.go:141] libmachine: STDERR: 
	I0818 12:23:35.854275    3438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2 +20000M
	I0818 12:23:35.862997    3438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:35.863015    3438 main.go:141] libmachine: STDERR: 
	I0818 12:23:35.863045    3438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:35.863048    3438 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:35.863062    3438 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:35.863092    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:93:80:81:98:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:35.864807    3438 main.go:141] libmachine: STDOUT: 
	I0818 12:23:35.864823    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:35.864838    3438 client.go:171] duration metric: took 222.239542ms to LocalClient.Create
	I0818 12:23:37.866915    3438 start.go:128] duration metric: took 2.244862792s to createHost
	I0818 12:23:37.866944    3438 start.go:83] releasing machines lock for "offline-docker-629000", held for 2.244935125s
	W0818 12:23:37.866975    3438 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:37.877418    3438 out.go:177] * Deleting "offline-docker-629000" in qemu2 ...
	W0818 12:23:37.888533    3438 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:37.888544    3438 start.go:729] Will try again in 5 seconds ...
	I0818 12:23:42.890718    3438 start.go:360] acquireMachinesLock for offline-docker-629000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:42.891098    3438 start.go:364] duration metric: took 294.75µs to acquireMachinesLock for "offline-docker-629000"
	I0818 12:23:42.891220    3438 start.go:93] Provisioning new machine with config: &{Name:offline-docker-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:42.891456    3438 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:42.900975    3438 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:42.947921    3438 start.go:159] libmachine.API.Create for "offline-docker-629000" (driver="qemu2")
	I0818 12:23:42.947967    3438 client.go:168] LocalClient.Create starting
	I0818 12:23:42.948106    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:42.948172    3438 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:42.948189    3438 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:42.948260    3438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:42.948306    3438 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:42.948323    3438 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:42.948920    3438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:43.113843    3438 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:43.233867    3438 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:43.233873    3438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:43.234044    3438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:43.243508    3438 main.go:141] libmachine: STDOUT: 
	I0818 12:23:43.243528    3438 main.go:141] libmachine: STDERR: 
	I0818 12:23:43.243583    3438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2 +20000M
	I0818 12:23:43.251458    3438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:43.251477    3438 main.go:141] libmachine: STDERR: 
	I0818 12:23:43.251488    3438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:43.251492    3438 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:43.251501    3438 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:43.251535    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:42:82:bd:f5:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/offline-docker-629000/disk.qcow2
	I0818 12:23:43.253146    3438 main.go:141] libmachine: STDOUT: 
	I0818 12:23:43.253163    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:43.253175    3438 client.go:171] duration metric: took 305.206458ms to LocalClient.Create
	I0818 12:23:45.255337    3438 start.go:128] duration metric: took 2.363857792s to createHost
	I0818 12:23:45.255382    3438 start.go:83] releasing machines lock for "offline-docker-629000", held for 2.364283667s
	W0818 12:23:45.255769    3438 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:45.269379    3438 out.go:201] 
	W0818 12:23:45.274421    3438 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:23:45.274501    3438 out.go:270] * 
	* 
	W0818 12:23:45.277216    3438 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:23:45.285364    3438 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-629000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-18 12:23:45.302431 -0700 PDT m=+2789.062313792
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-629000 -n offline-docker-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-629000 -n offline-docker-629000: exit status 7 (65.772042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-629000
--- FAIL: TestOffline (9.95s)
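
Note: this failure, and nearly every other qemu2 failure in this run (TestCertOptions, TestCertExpiration, the TestNetworkPlugins and TestStartStop groups, and so on), shares one root cause: nothing is listening on /var/run/socket_vmnet, so each socket_vmnet_client launch of qemu-system-aarch64 dies with "Connection refused" before the VM boots. A minimal check-and-restart sketch for the build agent, assuming socket_vmnet was installed via Homebrew (paths and service management differ for a source install):

	# Does the socket exist on the agent?
	ls -l /var/run/socket_vmnet
	# Restart the daemon; it must run as root because vmnet requires it
	sudo brew services restart socket_vmnet

Until the daemon is back and the socket accepts connections, every test that creates a qemu2 VM on the socket_vmnet network will fail the same way.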

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.966632208s)

-- stdout --
	* [cert-options-287000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-287000" primary control-plane node in "cert-options-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-287000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-287000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.965875ms)

-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-287000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-287000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.12575ms)

-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-287000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-287000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-287000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-18 12:24:16.669393 -0700 PDT m=+2820.429550917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-287000 -n cert-options-287000: exit status 7 (30.232458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-287000
--- FAIL: TestCertOptions (10.23s)
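
For reference, the SAN and port assertions above can be reproduced by hand once a cluster actually boots. A minimal sketch, assuming the same profile name and a running VM (neither held in this run, because the host never started):

    # Print the apiserver certificate's Subject Alternative Names from inside the node.
    # The test expects 127.0.0.1, 192.168.15.15, localhost, and www.google.com to be listed.
    out/minikube-darwin-arm64 -p cert-options-287000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # The port check is the same idea against the kubeconfig inside the VM;
    # the server line should end in :8555 per --apiserver-port=8555.
    out/minikube-darwin-arm64 ssh -p cert-options-287000 -- "sudo grep server /etc/kubernetes/admin.conf"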

TestCertExpiration (195.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.040194625s)

-- stdout --
	* [cert-expiration-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-172000" primary control-plane node in "cert-expiration-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.224925958s)

-- stdout --
	* [cert-expiration-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-172000" primary control-plane node in "cert-expiration-172000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-172000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-172000" primary control-plane node in "cert-expiration-172000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-172000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-172000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-18 12:27:16.709528 -0700 PDT m=+3000.471268876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-172000 -n cert-expiration-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-172000 -n cert-expiration-172000: exit status 7 (66.663959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-172000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-172000
--- FAIL: TestCertExpiration (195.42s)
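
Every start failure in this report bottoms out in the same error: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor. A minimal triage sketch for the build host, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver documentation describes (the launchd-managed service is an assumption about this host's setup):

    # Confirm whether anything is serving the socket the qemu2 driver expects.
    ls -l /var/run/socket_vmnet
    # If the daemon is down, restarting it should clear the repeated
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused' errors.
    sudo brew services restart socket_vmnet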

TestDockerFlags (10.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-876000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-876000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.9945395s)

-- stdout --
	* [docker-flags-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-876000" primary control-plane node in "docker-flags-876000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:23:56.350975    3631 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:56.351121    3631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:56.351125    3631 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:56.351127    3631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:56.351256    3631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:23:56.352282    3631 out.go:352] Setting JSON to false
	I0818 12:23:56.368290    3631 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3206,"bootTime":1724005830,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:23:56.368373    3631 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:23:56.373434    3631 out.go:177] * [docker-flags-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:23:56.381170    3631 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:23:56.381232    3631 notify.go:220] Checking for updates...
	I0818 12:23:56.388233    3631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:23:56.391177    3631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:23:56.394189    3631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:23:56.397216    3631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:23:56.400175    3631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:23:56.403560    3631 config.go:182] Loaded profile config "force-systemd-flag-574000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:56.403624    3631 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:56.403679    3631 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:23:56.408165    3631 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:23:56.415173    3631 start.go:297] selected driver: qemu2
	I0818 12:23:56.415181    3631 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:23:56.415199    3631 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:23:56.417492    3631 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:23:56.420184    3631 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:23:56.423244    3631 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0818 12:23:56.423265    3631 cni.go:84] Creating CNI manager for ""
	I0818 12:23:56.423286    3631 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:23:56.423292    3631 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:23:56.423325    3631 start.go:340] cluster config:
	{Name:docker-flags-876000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:23:56.427058    3631 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:56.434187    3631 out.go:177] * Starting "docker-flags-876000" primary control-plane node in "docker-flags-876000" cluster
	I0818 12:23:56.438162    3631 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:23:56.438181    3631 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:23:56.438193    3631 cache.go:56] Caching tarball of preloaded images
	I0818 12:23:56.438276    3631 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:23:56.438283    3631 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:23:56.438348    3631 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/docker-flags-876000/config.json ...
	I0818 12:23:56.438361    3631 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/docker-flags-876000/config.json: {Name:mkeac298d10a04d3af5723b4be03e734250145a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:23:56.438587    3631 start.go:360] acquireMachinesLock for docker-flags-876000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:56.438624    3631 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "docker-flags-876000"
	I0818 12:23:56.438637    3631 start.go:93] Provisioning new machine with config: &{Name:docker-flags-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:56.438668    3631 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:56.447099    3631 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:56.465513    3631 start.go:159] libmachine.API.Create for "docker-flags-876000" (driver="qemu2")
	I0818 12:23:56.465541    3631 client.go:168] LocalClient.Create starting
	I0818 12:23:56.465608    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:56.465642    3631 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:56.465655    3631 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:56.465694    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:56.465721    3631 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:56.465728    3631 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:56.466095    3631 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:56.621560    3631 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:56.813502    3631 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:56.813513    3631 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:56.813730    3631 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:23:56.823458    3631 main.go:141] libmachine: STDOUT: 
	I0818 12:23:56.823480    3631 main.go:141] libmachine: STDERR: 
	I0818 12:23:56.823531    3631 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2 +20000M
	I0818 12:23:56.831515    3631 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:56.831535    3631 main.go:141] libmachine: STDERR: 
	I0818 12:23:56.831548    3631 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:23:56.831552    3631 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:56.831566    3631 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:56.831592    3631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:ec:4a:49:ee:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:23:56.833202    3631 main.go:141] libmachine: STDOUT: 
	I0818 12:23:56.833218    3631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:56.833236    3631 client.go:171] duration metric: took 367.692792ms to LocalClient.Create
	I0818 12:23:58.835383    3631 start.go:128] duration metric: took 2.39672175s to createHost
	I0818 12:23:58.835467    3631 start.go:83] releasing machines lock for "docker-flags-876000", held for 2.396853667s
	W0818 12:23:58.835521    3631 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:58.857402    3631 out.go:177] * Deleting "docker-flags-876000" in qemu2 ...
	W0818 12:23:58.877813    3631 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:58.877831    3631 start.go:729] Will try again in 5 seconds ...
	I0818 12:24:03.880027    3631 start.go:360] acquireMachinesLock for docker-flags-876000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:24:03.925229    3631 start.go:364] duration metric: took 45.050208ms to acquireMachinesLock for "docker-flags-876000"
	I0818 12:24:03.925379    3631 start.go:93] Provisioning new machine with config: &{Name:docker-flags-876000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-876000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:24:03.925664    3631 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:24:03.934214    3631 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:24:03.984172    3631 start.go:159] libmachine.API.Create for "docker-flags-876000" (driver="qemu2")
	I0818 12:24:03.984227    3631 client.go:168] LocalClient.Create starting
	I0818 12:24:03.984371    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:24:03.984439    3631 main.go:141] libmachine: Decoding PEM data...
	I0818 12:24:03.984459    3631 main.go:141] libmachine: Parsing certificate...
	I0818 12:24:03.984523    3631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:24:03.984573    3631 main.go:141] libmachine: Decoding PEM data...
	I0818 12:24:03.984586    3631 main.go:141] libmachine: Parsing certificate...
	I0818 12:24:03.985666    3631 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:24:04.167491    3631 main.go:141] libmachine: Creating SSH key...
	I0818 12:24:04.250038    3631 main.go:141] libmachine: Creating Disk image...
	I0818 12:24:04.250048    3631 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:24:04.250217    3631 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:24:04.259364    3631 main.go:141] libmachine: STDOUT: 
	I0818 12:24:04.259383    3631 main.go:141] libmachine: STDERR: 
	I0818 12:24:04.259438    3631 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2 +20000M
	I0818 12:24:04.267302    3631 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:24:04.267321    3631 main.go:141] libmachine: STDERR: 
	I0818 12:24:04.267330    3631 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:24:04.267336    3631 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:24:04.267351    3631 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:24:04.267386    3631 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:a6:dd:52:d3:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/docker-flags-876000/disk.qcow2
	I0818 12:24:04.268921    3631 main.go:141] libmachine: STDOUT: 
	I0818 12:24:04.268938    3631 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:24:04.268950    3631 client.go:171] duration metric: took 284.719292ms to LocalClient.Create
	I0818 12:24:06.271084    3631 start.go:128] duration metric: took 2.345400084s to createHost
	I0818 12:24:06.271139    3631 start.go:83] releasing machines lock for "docker-flags-876000", held for 2.345888708s
	W0818 12:24:06.271418    3631 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:24:06.288065    3631 out.go:201] 
	W0818 12:24:06.292080    3631 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:24:06.292102    3631 out.go:270] * 
	* 
	W0818 12:24:06.294605    3631 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:24:06.304947    3631 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-876000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.667792ms)

-- stdout --
	* The control-plane node docker-flags-876000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-876000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-876000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-876000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-876000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-876000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-876000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.720458ms)

-- stdout --
	* The control-plane node docker-flags-876000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-876000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-876000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-876000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-876000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-876000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-18 12:24:06.443386 -0700 PDT m=+2810.203454126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-876000 -n docker-flags-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-876000 -n docker-flags-876000: exit status 7 (34.113584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-876000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-876000
--- FAIL: TestDockerFlags (10.23s)
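
Had the VM come up, the flag-propagation checks in docker_test.go reduce to two systemctl queries against the docker unit. A sketch of the manual equivalent, assuming the same profile and a running node (neither held here):

    # --docker-env values should surface in the unit's Environment property.
    out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # expected to contain FOO=BAR and BAZ=BAT
    # --docker-opt values should surface in ExecStart as dockerd arguments.
    out/minikube-darwin-arm64 -p docker-flags-876000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # expected to contain --debug and --icc=true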

TestForceSystemdFlag (10.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-574000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-574000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.18440275s)

-- stdout --
	* [force-systemd-flag-574000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-574000" primary control-plane node in "force-systemd-flag-574000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-574000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:23:51.097186    3609 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:51.097347    3609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:51.097350    3609 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:51.097352    3609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:51.097478    3609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:23:51.098684    3609 out.go:352] Setting JSON to false
	I0818 12:23:51.116069    3609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3201,"bootTime":1724005830,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:23:51.116155    3609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:23:51.122667    3609 out.go:177] * [force-systemd-flag-574000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:23:51.128656    3609 notify.go:220] Checking for updates...
	I0818 12:23:51.134586    3609 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:23:51.141506    3609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:23:51.149548    3609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:23:51.157556    3609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:23:51.166621    3609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:23:51.174566    3609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:23:51.178741    3609 config.go:182] Loaded profile config "force-systemd-env-172000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:51.178807    3609 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:51.178852    3609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:23:51.183579    3609 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:23:51.191560    3609 start.go:297] selected driver: qemu2
	I0818 12:23:51.191565    3609 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:23:51.191571    3609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:23:51.193972    3609 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:23:51.197614    3609 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:23:51.199058    3609 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 12:23:51.199082    3609 cni.go:84] Creating CNI manager for ""
	I0818 12:23:51.199092    3609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:23:51.199096    3609 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:23:51.199124    3609 start.go:340] cluster config:
	{Name:force-systemd-flag-574000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:23:51.203369    3609 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:51.211587    3609 out.go:177] * Starting "force-systemd-flag-574000" primary control-plane node in "force-systemd-flag-574000" cluster
	I0818 12:23:51.215584    3609 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:23:51.215605    3609 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:23:51.215615    3609 cache.go:56] Caching tarball of preloaded images
	I0818 12:23:51.215673    3609 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:23:51.215688    3609 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:23:51.215754    3609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/force-systemd-flag-574000/config.json ...
	I0818 12:23:51.215766    3609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/force-systemd-flag-574000/config.json: {Name:mkf3c3751efca3b5e7dfc4eadeb0056285221b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:23:51.216017    3609 start.go:360] acquireMachinesLock for force-systemd-flag-574000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:51.216061    3609 start.go:364] duration metric: took 33.083µs to acquireMachinesLock for "force-systemd-flag-574000"
	I0818 12:23:51.216076    3609 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:51.216110    3609 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:51.224557    3609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:51.245210    3609 start.go:159] libmachine.API.Create for "force-systemd-flag-574000" (driver="qemu2")
	I0818 12:23:51.245238    3609 client.go:168] LocalClient.Create starting
	I0818 12:23:51.245299    3609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:51.245336    3609 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:51.245344    3609 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:51.245386    3609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:51.245412    3609 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:51.245419    3609 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:51.245894    3609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:51.399908    3609 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:51.719212    3609 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:51.719222    3609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:51.719497    3609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:51.729458    3609 main.go:141] libmachine: STDOUT: 
	I0818 12:23:51.729478    3609 main.go:141] libmachine: STDERR: 
	I0818 12:23:51.729525    3609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2 +20000M
	I0818 12:23:51.737715    3609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:51.737737    3609 main.go:141] libmachine: STDERR: 
	I0818 12:23:51.737755    3609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:51.737761    3609 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:51.737772    3609 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:51.737807    3609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:02:89:3e:f5:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:51.739470    3609 main.go:141] libmachine: STDOUT: 
	I0818 12:23:51.739486    3609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:51.739507    3609 client.go:171] duration metric: took 494.268458ms to LocalClient.Create
	I0818 12:23:53.741671    3609 start.go:128] duration metric: took 2.525561541s to createHost
	I0818 12:23:53.741729    3609 start.go:83] releasing machines lock for "force-systemd-flag-574000", held for 2.525681042s
	W0818 12:23:53.741788    3609 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:53.762884    3609 out.go:177] * Deleting "force-systemd-flag-574000" in qemu2 ...
	W0818 12:23:53.785601    3609 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:53.785622    3609 start.go:729] Will try again in 5 seconds ...
	I0818 12:23:58.787852    3609 start.go:360] acquireMachinesLock for force-systemd-flag-574000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:58.835567    3609 start.go:364] duration metric: took 47.542709ms to acquireMachinesLock for "force-systemd-flag-574000"
	I0818 12:23:58.835728    3609 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-574000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:58.835976    3609 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:58.845400    3609 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:58.896821    3609 start.go:159] libmachine.API.Create for "force-systemd-flag-574000" (driver="qemu2")
	I0818 12:23:58.896872    3609 client.go:168] LocalClient.Create starting
	I0818 12:23:58.897006    3609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:58.897078    3609 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:58.897095    3609 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:58.897158    3609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:58.897203    3609 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:58.897217    3609 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:58.897779    3609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:59.073078    3609 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:59.183575    3609 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:59.183584    3609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:59.183783    3609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:59.193182    3609 main.go:141] libmachine: STDOUT: 
	I0818 12:23:59.193199    3609 main.go:141] libmachine: STDERR: 
	I0818 12:23:59.193249    3609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2 +20000M
	I0818 12:23:59.201141    3609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:59.201155    3609 main.go:141] libmachine: STDERR: 
	I0818 12:23:59.201167    3609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:59.201172    3609 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:59.201183    3609 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:59.201215    3609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:56:45:65:55:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-flag-574000/disk.qcow2
	I0818 12:23:59.202828    3609 main.go:141] libmachine: STDOUT: 
	I0818 12:23:59.202842    3609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:59.202855    3609 client.go:171] duration metric: took 305.980166ms to LocalClient.Create
	I0818 12:24:01.204326    3609 start.go:128] duration metric: took 2.368346416s to createHost
	I0818 12:24:01.204379    3609 start.go:83] releasing machines lock for "force-systemd-flag-574000", held for 2.368805667s
	W0818 12:24:01.204735    3609 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-574000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-574000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:24:01.218490    3609 out.go:201] 
	W0818 12:24:01.225422    3609 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:24:01.225449    3609 out.go:270] * 
	* 
	W0818 12:24:01.228507    3609 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:24:01.238328    3609 out.go:201] 

** /stderr **
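The stderr trace above shows the disk image being prepared successfully (qemu-img convert from raw to qcow2, then a +20000M resize) before the launch fails at the socket_vmnet step. As a reading aid, here is a minimal Go sketch of that two-step qemu-img invocation; buildDisk and the short paths are illustrative, not minikube's code, which operates on the .minikube/machines/<profile> directory seen in the log:

package main

import (
	"fmt"
	"os/exec"
)

// buildDisk mirrors the two qemu-img calls recorded in the log: convert the
// raw boot image to qcow2, then grow the qcow2 image by 20000 MB.
func buildDisk(raw, qcow2 string) error {
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, "+20000M"},
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v failed: %w (output: %s)", args, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical paths; the log uses .minikube/machines/<profile>/disk.qcow2.
	if err := buildDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
		fmt.Println(err)
	}
}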
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-574000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-574000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-574000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.199542ms)

-- stdout --
	* The control-plane node force-systemd-flag-574000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-574000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-574000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-18 12:24:01.332498 -0700 PDT m=+2805.092521834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-574000 -n force-systemd-flag-574000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-574000 -n force-systemd-flag-574000: exit status 7 (33.9985ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-574000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-574000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-574000
--- FAIL: TestForceSystemdFlag (10.37s)
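Both createHost attempts above fail at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu-system-aarch64 is never started and every later step sees a stopped host. A minimal sketch (not minikube code) of a preflight probe for that control socket; the path matches the SocketVMnetPath in the config dump above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same condition the log reports as:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way on the agent, the socket_vmnet daemon is not running (or the test user cannot access the socket), which would account for every qemu2 start failure in this report.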

TestForceSystemdEnv (10.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-172000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-172000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.686367s)

-- stdout --
	* [force-systemd-env-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-172000" primary control-plane node in "force-systemd-env-172000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-172000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:23:45.478159    3575 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:45.478292    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:45.478295    3575 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:45.478297    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:45.478435    3575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:23:45.479502    3575 out.go:352] Setting JSON to false
	I0818 12:23:45.495838    3575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3195,"bootTime":1724005830,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:23:45.495908    3575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:23:45.500318    3575 out.go:177] * [force-systemd-env-172000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:23:45.508244    3575 notify.go:220] Checking for updates...
	I0818 12:23:45.512334    3575 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:23:45.519264    3575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:23:45.527339    3575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:23:45.535272    3575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:23:45.543376    3575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:23:45.551287    3575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0818 12:23:45.555638    3575 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:45.555683    3575 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:23:45.560372    3575 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:23:45.567294    3575 start.go:297] selected driver: qemu2
	I0818 12:23:45.567299    3575 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:23:45.567303    3575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:23:45.569488    3575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:23:45.573309    3575 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:23:45.577402    3575 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 12:23:45.577417    3575 cni.go:84] Creating CNI manager for ""
	I0818 12:23:45.577426    3575 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:23:45.577430    3575 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:23:45.577459    3575 start.go:340] cluster config:
	{Name:force-systemd-env-172000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:23:45.581142    3575 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:45.590311    3575 out.go:177] * Starting "force-systemd-env-172000" primary control-plane node in "force-systemd-env-172000" cluster
	I0818 12:23:45.594334    3575 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:23:45.594351    3575 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:23:45.594359    3575 cache.go:56] Caching tarball of preloaded images
	I0818 12:23:45.594420    3575 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:23:45.594426    3575 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:23:45.594486    3575 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/force-systemd-env-172000/config.json ...
	I0818 12:23:45.594498    3575 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/force-systemd-env-172000/config.json: {Name:mk06d0bf805c751ba3263f09ba3618f114b2cda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:23:45.594709    3575 start.go:360] acquireMachinesLock for force-systemd-env-172000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:45.594746    3575 start.go:364] duration metric: took 29µs to acquireMachinesLock for "force-systemd-env-172000"
	I0818 12:23:45.594759    3575 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:45.594783    3575 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:45.602247    3575 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:45.619869    3575 start.go:159] libmachine.API.Create for "force-systemd-env-172000" (driver="qemu2")
	I0818 12:23:45.619906    3575 client.go:168] LocalClient.Create starting
	I0818 12:23:45.619982    3575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:45.620014    3575 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:45.620023    3575 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:45.620067    3575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:45.620090    3575 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:45.620098    3575 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:45.620458    3575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:45.774929    3575 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:45.825053    3575 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:45.825059    3575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:45.825238    3575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:45.834584    3575 main.go:141] libmachine: STDOUT: 
	I0818 12:23:45.834602    3575 main.go:141] libmachine: STDERR: 
	I0818 12:23:45.834651    3575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2 +20000M
	I0818 12:23:45.842797    3575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:45.842813    3575 main.go:141] libmachine: STDERR: 
	I0818 12:23:45.842831    3575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:45.842835    3575 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:45.842849    3575 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:45.842882    3575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:49:bd:15:05:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:45.844513    3575 main.go:141] libmachine: STDOUT: 
	I0818 12:23:45.844532    3575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:45.844549    3575 client.go:171] duration metric: took 224.637917ms to LocalClient.Create
	I0818 12:23:47.846624    3575 start.go:128] duration metric: took 2.251849167s to createHost
	I0818 12:23:47.846646    3575 start.go:83] releasing machines lock for "force-systemd-env-172000", held for 2.251914541s
	W0818 12:23:47.846668    3575 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:47.856919    3575 out.go:177] * Deleting "force-systemd-env-172000" in qemu2 ...
	W0818 12:23:47.867435    3575 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:47.867442    3575 start.go:729] Will try again in 5 seconds ...
	I0818 12:23:52.869718    3575 start.go:360] acquireMachinesLock for force-systemd-env-172000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:53.741881    3575 start.go:364] duration metric: took 872.049792ms to acquireMachinesLock for "force-systemd-env-172000"
	I0818 12:23:53.742028    3575 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-172000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-172000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:53.742374    3575 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:53.753937    3575 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:23:53.805766    3575 start.go:159] libmachine.API.Create for "force-systemd-env-172000" (driver="qemu2")
	I0818 12:23:53.805813    3575 client.go:168] LocalClient.Create starting
	I0818 12:23:53.805938    3575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:53.806005    3575 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:53.806024    3575 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:53.806085    3575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:53.806129    3575 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:53.806140    3575 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:53.806670    3575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:53.975686    3575 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:54.059702    3575 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:54.059708    3575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:54.059888    3575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:54.069299    3575 main.go:141] libmachine: STDOUT: 
	I0818 12:23:54.069316    3575 main.go:141] libmachine: STDERR: 
	I0818 12:23:54.069369    3575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2 +20000M
	I0818 12:23:54.077284    3575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:54.077298    3575 main.go:141] libmachine: STDERR: 
	I0818 12:23:54.077325    3575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:54.077330    3575 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:54.077340    3575 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:54.077370    3575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:2a:cc:a8:fd:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/force-systemd-env-172000/disk.qcow2
	I0818 12:23:54.079018    3575 main.go:141] libmachine: STDOUT: 
	I0818 12:23:54.079034    3575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:54.079055    3575 client.go:171] duration metric: took 273.239083ms to LocalClient.Create
	I0818 12:23:56.081289    3575 start.go:128] duration metric: took 2.338880958s to createHost
	I0818 12:23:56.081368    3575 start.go:83] releasing machines lock for "force-systemd-env-172000", held for 2.339449958s
	W0818 12:23:56.081727    3575 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-172000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:56.099447    3575 out.go:201] 
	W0818 12:23:56.109228    3575 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:23:56.109258    3575 out.go:270] * 
	* 
	W0818 12:23:56.111871    3575 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:23:56.122220    3575 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-172000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-172000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-172000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.404584ms)

-- stdout --
	* The control-plane node force-systemd-env-172000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-172000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-172000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-18 12:23:56.216218 -0700 PDT m=+2799.976196751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-172000 -n force-systemd-env-172000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-172000 -n force-systemd-env-172000: exit status 7 (33.850708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-172000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-172000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-172000
--- FAIL: TestForceSystemdEnv (10.87s)
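As with TestForceSystemdFlag, the trace shows minikube's own recovery behavior: StartHost fails, the half-created profile is deleted, and start.go waits five seconds before one more attempt (the "Will try again in 5 seconds" line from start.go:729). A sketch of that retry shape, assuming a single fixed 5-second delay and exactly two attempts as the trace shows; startHost is a hypothetical stand-in for the real provisioning call, not minikube's function:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a stand-in for the provisioning step that fails in the log with:
// Failed to connect to "/var/run/socket_vmnet": Connection refused
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	if err := startHost(); err == nil {
		return nil
	}
	// First attempt failed: the real code deletes the machine, then waits.
	fmt.Println("! StartHost failed, but will try again in 5 seconds ...")
	time.Sleep(5 * time.Second)
	return startHost()
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}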

TestFunctional/parallel/ServiceCmdConnect (30.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-685000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-685000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-wmjnx" [897a1e9a-7b2c-4477-bd69-8d0c87a9ac89] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-wmjnx" [897a1e9a-7b2c-4477-bd69-8d0c87a9ac89] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003700875s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31994
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31994: Get "http://192.168.105.4:31994": dial tcp 192.168.105.4:31994: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-685000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-wmjnx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-685000/192.168.105.4
Start Time:       Sun, 18 Aug 2024 11:48:21 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://87c91b59ba38a2bdb16600829ac1ec5c7be532e574dadfda3097ef9d6deca814
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 18 Aug 2024 11:48:33 -0700
      Finished:     Sun, 18 Aug 2024 11:48:33 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kz28q (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-kz28q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-wmjnx to functional-685000
  Normal   Pulled     17s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    17s (x3 over 29s)  kubelet            Created container echoserver-arm
  Normal   Started    17s (x3 over 29s)  kubelet            Started container echoserver-arm
  Warning  BackOff    4s (x3 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-wmjnx_default(897a1e9a-7b2c-4477-bd69-8d0c87a9ac89)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-685000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-685000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.0.15
IPs:                      10.111.0.15
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31994/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
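The service description explains the repeated connection-refused fetches: Endpoints is empty because the only backing pod never becomes Ready. Its container exits immediately with "exec /usr/sbin/nginx: exec format error", which points at a binary built for a different CPU architecture than the arm64 node, so nothing ever answers on NodePort 31994. Below is a minimal sketch of the polling loop implied by the repeated functional_test.go:1661 errors above; the attempt count and delay are assumptions, not the test's actual values:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry issues GETs until the URL answers or attempts run out. While
// the service has no endpoints, every attempt fails with connection refused.
func fetchWithRetry(url string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // an endpoint finally answered
		}
		lastErr = err
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// URL taken from the test output above (NodePort 31994 on the minikube VM).
	if err := fetchWithRetry("http://192.168.105.4:31994", 7, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}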
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-685000 -n functional-685000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons    | functional-685000 addons list                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -o json                                                                                                              |                   |         |         |                     |                     |
	| service   | functional-685000 service                                                                                            | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh -- ls                                                                                          | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh cat                                                                                            | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | /mount-9p/test-1724006922357724000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh stat                                                                                           | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh stat                                                                                           | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh sudo                                                                                           | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2628426462/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh -- ls                                                                                          | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh sudo                                                                                           | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-685000 ssh findmnt                                                                                        | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT | 18 Aug 24 11:48 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-685000 --dry-run                                                                                       | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-685000                                                                                                 | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-685000 | jenkins | v1.33.1 | 18 Aug 24 11:48 PDT |                     |
	|           | -p functional-685000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
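	The mount/ssh pairs above are the harness checking a 9p mount end to end: mount from the host, findmnt/ls from the guest, then unmount. A minimal manual reproduction, assuming a running functional-685000 profile (the host path below is illustrative, not the tmpdir from the log):

	  # host side: serve a local directory into the VM over 9p on a fixed port
	  minikube mount -p functional-685000 /tmp/mount-src:/mount-9p --port 46464 &

	  # guest side: confirm the mount exists and is 9p-backed
	  minikube ssh -p functional-685000 -- "findmnt -T /mount-9p | grep 9p"
	  minikube ssh -p functional-685000 -- "ls -la /mount-9p"

	  # cleanup: force-unmount in the guest, then kill the host-side mount process
	  minikube ssh -p functional-685000 -- "sudo umount -f /mount-9p"
	  minikube mount -p functional-685000 --kill=true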
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:48:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:48:50.764082    2166 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:48:50.764198    2166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.764208    2166 out.go:358] Setting ErrFile to fd 2...
	I0818 11:48:50.764212    2166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.764351    2166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 11:48:50.765727    2166 out.go:352] Setting JSON to false
	I0818 11:48:50.784405    2166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1100,"bootTime":1724005830,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:48:50.784487    2166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:48:50.788532    2166 out.go:177] * [functional-685000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 11:48:50.793553    2166 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:48:50.793605    2166 notify.go:220] Checking for updates...
	I0818 11:48:50.798662    2166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:48:50.805534    2166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:48:50.808550    2166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:48:50.811551    2166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 11:48:50.814552    2166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:48:50.818338    2166 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:48:50.818637    2166 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:48:50.822460    2166 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 11:48:50.829560    2166 start.go:297] selected driver: qemu2
	I0818 11:48:50.829568    2166 start.go:901] validating driver "qemu2" against &{Name:functional-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:48:50.829618    2166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:48:50.836514    2166 out.go:201] 
	W0818 11:48:50.840548    2166 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 11:48:50.844463    2166 out.go:201] 
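	The dry-run failure above is the expected outcome of minikube's preflight memory validation: the test deliberately requests 250MB, below the 1800MB usable minimum the check enforces, so start exits with RSRC_INSUFFICIENT_REQ_MEMORY before the qemu2 driver is ever invoked. The boundary, sketched with the values from the log (the passing case assumes the host has the memory to spare):

	  # rejected by preflight: 250MiB < 1800MB usable minimum
	  minikube start -p functional-685000 --dry-run --memory 250MB --driver=qemu2

	  # passes the same check (dry-run still creates nothing)
	  minikube start -p functional-685000 --dry-run --memory 1800MB --driver=qemu2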
	
	
	==> Docker <==
	Aug 18 18:48:34 functional-685000 dockerd[5927]: time="2024-08-18T18:48:34.986122315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:35 functional-685000 cri-dockerd[6174]: time="2024-08-18T18:48:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/704ea5027da271804046cc134767289e03223eaf4efa960d5a7b9ab507bad183/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 18 18:48:35 functional-685000 cri-dockerd[6174]: time="2024-08-18T18:48:35Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 18 18:48:35 functional-685000 dockerd[5927]: time="2024-08-18T18:48:35.823731201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:35 functional-685000 dockerd[5927]: time="2024-08-18T18:48:35.823951047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:35 functional-685000 dockerd[5927]: time="2024-08-18T18:48:35.824058970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:35 functional-685000 dockerd[5927]: time="2024-08-18T18:48:35.824133974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:43 functional-685000 dockerd[5927]: time="2024-08-18T18:48:43.720751588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:43 functional-685000 dockerd[5927]: time="2024-08-18T18:48:43.720965518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:43 functional-685000 dockerd[5927]: time="2024-08-18T18:48:43.720999353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:43 functional-685000 dockerd[5927]: time="2024-08-18T18:48:43.721078066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:43 functional-685000 cri-dockerd[6174]: time="2024-08-18T18:48:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 18 18:48:44 functional-685000 cri-dockerd[6174]: time="2024-08-18T18:48:44Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.021219628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.021250463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.021258714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.021288549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.054147849Z" level=info msg="shim disconnected" id=1b02ddc862fafc12be00da7c2debed8f51d9511c49ff1a20ef60712666ea705e namespace=moby
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.054256105Z" level=warning msg="cleaning up after shim disconnected" id=1b02ddc862fafc12be00da7c2debed8f51d9511c49ff1a20ef60712666ea705e namespace=moby
	Aug 18 18:48:45 functional-685000 dockerd[5927]: time="2024-08-18T18:48:45.054281940Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:48:45 functional-685000 dockerd[5921]: time="2024-08-18T18:48:45.054390904Z" level=info msg="ignoring event" container=1b02ddc862fafc12be00da7c2debed8f51d9511c49ff1a20ef60712666ea705e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:48:46 functional-685000 dockerd[5921]: time="2024-08-18T18:48:46.910085527Z" level=info msg="ignoring event" container=bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:48:46 functional-685000 dockerd[5927]: time="2024-08-18T18:48:46.910143947Z" level=info msg="shim disconnected" id=bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63 namespace=moby
	Aug 18 18:48:46 functional-685000 dockerd[5927]: time="2024-08-18T18:48:46.910169781Z" level=warning msg="cleaning up after shim disconnected" id=bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63 namespace=moby
	Aug 18 18:48:46 functional-685000 dockerd[5927]: time="2024-08-18T18:48:46.910175032Z" level=info msg="cleaning up dead shim" namespace=moby
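	The "shim disconnected" / "cleaning up dead shim" / "ignoring event" triplets above are the normal teardown sequence for the short-lived busybox-mount container and its pod sandbox, not daemon failures. The same stream can be tailed from the guest, assuming systemd journal access (standard on the minikube ISO):

	  # follow the Docker daemon journal inside the VM
	  minikube ssh -p functional-685000 -- "sudo journalctl -u docker --no-pager -n 50"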
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1b02ddc862faf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 seconds ago        Exited              mount-munger              0                   bef3166ef3f75       busybox-mount
	a71b31a6b3079       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                         16 seconds ago       Running             myfrontend                0                   704ea5027da27       sp-pod
	87c91b59ba38a       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   7fefe2fac460d       hello-node-connect-65d86f57f4-wmjnx
	4884a22cdd09c       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   c71b8ef5eaf0a       hello-node-64b4f8f9ff-6hqtl
	bc1f5e57e8c5c       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                         36 seconds ago       Running             nginx                     0                   0cdc35f078f22       nginx-svc
	5728590f6a1b1       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   40b06f3241979       coredns-6f6b679f8f-vlk7g
	ba0ec80fde959       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       4                   7eb8e997c0343       storage-provisioner
	fbcca605cbf85       71d55d66fd4ee                                                                                         About a minute ago   Running             kube-proxy                2                   2c22dbe34139f       kube-proxy-vqh7s
	e6c3322754d40       fbbbd428abb4d                                                                                         About a minute ago   Running             kube-scheduler            2                   ccb30bd8d93e7       kube-scheduler-functional-685000
	ff68f96f691e4       fcb0683e6bdbd                                                                                         About a minute ago   Running             kube-controller-manager   2                   dd7d7c13bef3b       kube-controller-manager-functional-685000
	ade39405edd83       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   21a1e17b3d04b       etcd-functional-685000
	1c05dec7edff0       cd0f0ae0ec9e0                                                                                         About a minute ago   Running             kube-apiserver            0                   f1c2a36fe7536       kube-apiserver-functional-685000
	977a3f1ca2da0       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       3                   1fc240211713f       storage-provisioner
	f18aae01911ef       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   64808333d9b9f       coredns-6f6b679f8f-vlk7g
	03a84af7b6d17       71d55d66fd4ee                                                                                         About a minute ago   Exited              kube-proxy                1                   f3c133b154f7f       kube-proxy-vqh7s
	90567af9031c8       fbbbd428abb4d                                                                                         About a minute ago   Exited              kube-scheduler            1                   f6b759fecdb6b       kube-scheduler-functional-685000
	42833e23517d4       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   58e464f80b407       etcd-functional-685000
	0332ab819c1e5       fcb0683e6bdbd                                                                                         About a minute ago   Exited              kube-controller-manager   1                   074b98173ab29       kube-controller-manager-functional-685000
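	This is the CRI-level view of the node. The two Exited echoserver-arm containers at ATTEMPT 2 are the hello-node backends crash-looping, which lines up with the TestFunctional/parallel/ServiceCmdConnect failure; the control-plane containers are all Running after their restarts. The listing can be regenerated from inside the VM, assuming crictl is on the guest PATH (it is on stock minikube images):

	  # same table, straight from the CRI socket
	  minikube ssh -p functional-685000 -- sudo crictl ps -a

	  # inspect one of the failing containers (ID taken from the listing above)
	  minikube ssh -p functional-685000 -- sudo crictl logs 87c91b59ba38a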
	
	
	==> coredns [5728590f6a1b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49144 - 53773 "HINFO IN 7926444181172922030.2347284060543877692. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008744718s
	[INFO] 10.244.0.1:51348 - 44870 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00010584s
	[INFO] 10.244.0.1:46353 - 53863 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000104839s
	[INFO] 10.244.0.1:32001 - 16266 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001523844s
	[INFO] 10.244.0.1:3981 - 57511 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000099381s
	[INFO] 10.244.0.1:6161 - 11888 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000098839s
	[INFO] 10.244.0.1:50545 - 7421 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00014905s
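	The NOERROR answers for nginx-svc.default.svc.cluster.local show the current coredns instance resolving the test service correctly over both A and AAAA. A quick manual probe, assuming a throwaway pod in the default namespace is acceptable (the pod name is illustrative; the image is the same busybox the suite already pulled):

	  kubectl run dns-probe --rm -i --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
	    -- nslookup nginx-svc.default.svc.cluster.local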
	
	
	==> coredns [f18aae01911e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45339 - 63410 "HINFO IN 3715519322601245725.5839089485721330605. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009944132s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
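	This earlier coredns generation spent its final phase unable to list Services, EndpointSlices, and Namespaces because the apiserver was down across the restart (connection refused to 10.96.0.1:443), then shut down cleanly on SIGTERM. While the pod object survives, logs from a replaced container generation remain retrievable:

	  # previous container instance of the same pod (name from the listing above)
	  kubectl -n kube-system logs coredns-6f6b679f8f-vlk7g --previous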
	
	
	==> describe nodes <==
	Name:               functional-685000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-685000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=functional-685000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T11_46_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:45:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-685000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:48:45 +0000   Sun, 18 Aug 2024 18:45:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:48:45 +0000   Sun, 18 Aug 2024 18:45:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:48:45 +0000   Sun, 18 Aug 2024 18:45:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:48:45 +0000   Sun, 18 Aug 2024 18:46:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-685000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c87ad8ae761471f906a17d586cd7f9f
	  System UUID:                8c87ad8ae761471f906a17d586cd7f9f
	  Boot ID:                    6856c9de-cdcd-4941-95bb-7dc42cc91a91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-6hqtl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     hello-node-connect-65d86f57f4-wmjnx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 coredns-6f6b679f8f-vlk7g                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m46s
	  kube-system                 etcd-functional-685000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m52s
	  kube-system                 kube-apiserver-functional-685000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-functional-685000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-vqh7s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-functional-685000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  Starting                 64s                    kube-proxy       
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node functional-685000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node functional-685000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node functional-685000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m49s                  kubelet          Node functional-685000 status is now: NodeReady
	  Normal  RegisteredNode           2m48s                  node-controller  Node functional-685000 event: Registered Node functional-685000 in Controller
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)    kubelet          Node functional-685000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)    kubelet          Node functional-685000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 113s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     113s (x7 over 113s)    kubelet          Node functional-685000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node functional-685000 event: Registered Node functional-685000 in Controller
	  Normal  Starting                 70s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node functional-685000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node functional-685000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)      kubelet          Node functional-685000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                    node-controller  Node functional-685000 event: Registered Node functional-685000 in Controller
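	The node itself is healthy: Ready since 18:46:02, no taints, and only 750m of 2 CPUs and 170Mi of ~3.7Gi requested, so the failures above do not look like node-level resource starvation. The repeated Starting kubelet / RegisteredNode event runs mirror the kubelet restarts the functional suite performs. The same view comes from the profile's kubeconfig context:

	  kubectl describe node functional-685000
	  # or the condensed form
	  kubectl get node functional-685000 -o wide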
	
	
	==> dmesg <==
	[ +11.748368] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.387038] systemd-fstab-generator[4999]: Ignoring "noauto" option for root device
	[ +11.000761] systemd-fstab-generator[5443]: Ignoring "noauto" option for root device
	[  +0.053783] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.113518] systemd-fstab-generator[5477]: Ignoring "noauto" option for root device
	[  +0.109630] systemd-fstab-generator[5489]: Ignoring "noauto" option for root device
	[  +0.107893] systemd-fstab-generator[5503]: Ignoring "noauto" option for root device
	[  +5.102464] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.413145] systemd-fstab-generator[6127]: Ignoring "noauto" option for root device
	[  +0.091841] systemd-fstab-generator[6139]: Ignoring "noauto" option for root device
	[  +0.093425] systemd-fstab-generator[6151]: Ignoring "noauto" option for root device
	[  +0.098581] systemd-fstab-generator[6166]: Ignoring "noauto" option for root device
	[  +0.220889] systemd-fstab-generator[6330]: Ignoring "noauto" option for root device
	[  +0.846421] systemd-fstab-generator[6452]: Ignoring "noauto" option for root device
	[  +1.344256] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.075040] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.209545] systemd-fstab-generator[7443]: Ignoring "noauto" option for root device
	[Aug18 18:48] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.349775] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.302944] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.056002] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.312919] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.753005] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.544449] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.036944] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [42833e23517d] <==
	{"level":"info","ts":"2024-08-18T18:46:59.930202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T18:46:59.930294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-18T18:46:59.930708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T18:46:59.930733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-18T18:46:59.930762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-18T18:46:59.930792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-18T18:46:59.935424Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-685000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T18:46:59.935894Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:46:59.936038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T18:46:59.936085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T18:46:59.936127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:46:59.938153Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:46:59.938153Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:46:59.940220Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-18T18:46:59.941817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T18:47:27.926228Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T18:47:27.926270Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-685000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-18T18:47:27.926316Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T18:47:27.926327Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T18:47:27.926348Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T18:47:27.926385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T18:47:27.935672Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-18T18:47:27.937119Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-18T18:47:27.937161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-18T18:47:27.937169Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-685000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [ade39405edd8] <==
	{"level":"info","ts":"2024-08-18T18:47:42.813367Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-18T18:47:42.813396Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-18T18:47:42.813427Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-18T18:47:42.813555Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-18T18:47:42.813577Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-18T18:47:42.814285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-18T18:47:42.814358Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-18T18:47:42.814433Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:47:42.814480Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:47:43.907686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:43.907775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:43.907805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:43.908119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:43.908152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:43.908170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:43.908183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:43.910394Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-685000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T18:47:43.910517Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:47:43.910579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:47:43.913517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:47:43.915258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T18:47:43.915292Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T18:47:43.915187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-18T18:47:43.916495Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:47:43.918062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:48:51 up 3 min,  0 users,  load average: 1.01, 0.46, 0.18
	Linux functional-685000 5.10.207 #1 SMP PREEMPT Thu Aug 15 18:35:44 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c05dec7edff] <==
	I0818 18:47:44.497017       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 18:47:44.506276       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 18:47:44.506390       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 18:47:44.506609       1 aggregator.go:171] initial CRD sync complete...
	I0818 18:47:44.506624       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 18:47:44.506628       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 18:47:44.506630       1 cache.go:39] Caches are synced for autoregister controller
	I0818 18:47:44.507306       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 18:47:44.507317       1 policy_source.go:224] refreshing policies
	E0818 18:47:44.508256       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 18:47:44.539124       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 18:47:45.392200       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 18:47:45.935464       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 18:47:45.941804       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 18:47:45.955318       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 18:47:45.962264       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 18:47:45.964199       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 18:47:48.079952       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 18:47:48.130025       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 18:48:01.371860       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.248.125"}
	I0818 18:48:06.677100       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0818 18:48:06.721618       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.85.211"}
	I0818 18:48:12.022723       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.217.127"}
	I0818 18:48:21.466171       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.0.15"}
	I0818 18:48:51.349823       1 controller.go:615] quota admission added evaluator for: namespaces
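	The alloc.go lines record the test Services receiving ClusterIPs (nginx-svc at 10.110.217.127, hello-node-connect at 10.111.0.15), so allocation itself worked; with the echoserver-arm backends exiting, hello-node-connect would be left without ready endpoints, which is consistent with the ServiceCmdConnect failure. To cross-check allocation against endpoints:

	  kubectl get svc -o wide
	  kubectl get endpoints hello-node-connect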
	
	
	==> kube-controller-manager [0332ab819c1e] <==
	I0818 18:47:03.789186       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0818 18:47:03.789199       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0818 18:47:03.789228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0818 18:47:03.789249       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0818 18:47:03.789294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-685000"
	I0818 18:47:03.789345       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0818 18:47:03.803636       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0818 18:47:03.815859       1 shared_informer.go:320] Caches are synced for stateful set
	I0818 18:47:03.815905       1 shared_informer.go:320] Caches are synced for PVC protection
	I0818 18:47:03.815863       1 shared_informer.go:320] Caches are synced for job
	I0818 18:47:03.817106       1 shared_informer.go:320] Caches are synced for taint
	I0818 18:47:03.817157       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0818 18:47:03.817219       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-685000"
	I0818 18:47:03.817275       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0818 18:47:03.817500       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 18:47:03.823863       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0818 18:47:03.888708       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 18:47:03.968594       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 18:47:03.973672       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 18:47:04.016669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0818 18:47:04.016770       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0818 18:47:04.018932       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 18:47:04.433241       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 18:47:04.479211       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 18:47:04.479275       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ff68f96f691e] <==
	I0818 18:48:33.832516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.543µs"
	I0818 18:48:34.576891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="58.628µs"
	I0818 18:48:40.838439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="42.461µs"
	I0818 18:48:45.451672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-685000"
	I0818 18:48:46.845441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="51.169µs"
	I0818 18:48:51.433952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="25.679394ms"
	E0818 18:48:51.433970       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.446815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="28.877865ms"
	E0818 18:48:51.446875       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.448043       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.051229ms"
	E0818 18:48:51.448056       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.458649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.320354ms"
	E0818 18:48:51.458668       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.458728       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.601759ms"
	E0818 18:48:51.458737       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.464011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.189651ms"
	E0818 18:48:51.464025       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.464205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.27028ms"
	E0818 18:48:51.464250       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0818 18:48:51.489050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.880348ms"
	I0818 18:48:51.514350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="33.929231ms"
	I0818 18:48:51.517232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="27.989565ms"
	I0818 18:48:51.517370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="33.044µs"
	I0818 18:48:51.528614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.142166ms"
	I0818 18:48:51.528713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="78.587µs"
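	The serviceaccount "kubernetes-dashboard" not found errors are a creation-order race while the dashboard addon is applied: the ReplicaSet controller syncs before the addon's ServiceAccount exists, and the later "Finished syncing" lines without a paired error show the retries succeeding once it does. After the addon settles, the namespace can be checked directly:

	  kubectl -n kubernetes-dashboard get serviceaccount,deployment,pod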
	
	
	==> kube-proxy [03a84af7b6d1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:47:01.585459       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:47:01.588582       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0818 18:47:01.588622       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:47:01.609383       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:47:01.609402       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:47:01.609416       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:47:01.610103       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:47:01.610195       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:47:01.610199       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:01.610710       1 config.go:197] "Starting service config controller"
	I0818 18:47:01.610754       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:47:01.610778       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:47:01.610808       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:47:01.611028       1 config.go:326] "Starting node config controller"
	I0818 18:47:01.611059       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:47:01.713684       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:47:01.713717       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:47:01.713730       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fbcca605cbf8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:47:46.371039       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:47:46.374488       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0818 18:47:46.374515       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:47:46.381980       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:47:46.381993       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:47:46.382003       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:47:46.382528       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:47:46.382602       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:47:46.382614       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:46.383085       1 config.go:197] "Starting service config controller"
	I0818 18:47:46.383124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:47:46.383151       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:47:46.383183       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:47:46.383400       1 config.go:326] "Starting node config controller"
	I0818 18:47:46.383421       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:47:46.483674       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:47:46.483716       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:47:46.483674       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [90567af9031c] <==
	I0818 18:46:59.144040       1 serving.go:386] Generated self-signed cert in-memory
	W0818 18:47:00.464540       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 18:47:00.464666       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 18:47:00.464694       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 18:47:00.464712       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 18:47:00.476868       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 18:47:00.476971       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:00.477996       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 18:47:00.478033       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 18:47:00.478253       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 18:47:00.478296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 18:47:00.578276       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 18:47:27.935471       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0818 18:47:27.935492       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e6c3322754d4] <==
	I0818 18:47:43.400622       1 serving.go:386] Generated self-signed cert in-memory
	W0818 18:47:44.415691       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 18:47:44.415710       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 18:47:44.415714       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 18:47:44.415717       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 18:47:44.432750       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 18:47:44.432847       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:44.433798       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 18:47:44.435508       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 18:47:44.435544       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 18:47:44.435565       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 18:47:44.539271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 18:48:40 functional-685000 kubelet[6459]: E0818 18:48:40.827506    6459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-6hqtl_default(f8bc0a3f-c956-409e-8acb-78279b6327c0)\"" pod="default/hello-node-64b4f8f9ff-6hqtl" podUID="f8bc0a3f-c956-409e-8acb-78279b6327c0"
	Aug 18 18:48:41 functional-685000 kubelet[6459]: E0818 18:48:41.831753    6459 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 18:48:41 functional-685000 kubelet[6459]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 18:48:41 functional-685000 kubelet[6459]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 18:48:41 functional-685000 kubelet[6459]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 18:48:41 functional-685000 kubelet[6459]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 18:48:41 functional-685000 kubelet[6459]: I0818 18:48:41.908761    6459 scope.go:117] "RemoveContainer" containerID="8f5a889a1a18c6e773f9814f4458933f3665f1ae4b5cd41cde6528eb16f2c35b"
	Aug 18 18:48:43 functional-685000 kubelet[6459]: I0818 18:48:43.479656    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsbb\" (UniqueName: \"kubernetes.io/projected/d748de01-96a9-4c32-8df6-899a9cd0de31-kube-api-access-qcsbb\") pod \"busybox-mount\" (UID: \"d748de01-96a9-4c32-8df6-899a9cd0de31\") " pod="default/busybox-mount"
	Aug 18 18:48:43 functional-685000 kubelet[6459]: I0818 18:48:43.479688    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d748de01-96a9-4c32-8df6-899a9cd0de31-test-volume\") pod \"busybox-mount\" (UID: \"d748de01-96a9-4c32-8df6-899a9cd0de31\") " pod="default/busybox-mount"
	Aug 18 18:48:43 functional-685000 kubelet[6459]: I0818 18:48:43.757636    6459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63"
	Aug 18 18:48:46 functional-685000 kubelet[6459]: I0818 18:48:46.830278    6459 scope.go:117] "RemoveContainer" containerID="87c91b59ba38a2bdb16600829ac1ec5c7be532e574dadfda3097ef9d6deca814"
	Aug 18 18:48:46 functional-685000 kubelet[6459]: E0818 18:48:46.835846    6459 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-wmjnx_default(897a1e9a-7b2c-4477-bd69-8d0c87a9ac89)\"" pod="default/hello-node-connect-65d86f57f4-wmjnx" podUID="897a1e9a-7b2c-4477-bd69-8d0c87a9ac89"
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.020005    6459 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d748de01-96a9-4c32-8df6-899a9cd0de31-test-volume\") pod \"d748de01-96a9-4c32-8df6-899a9cd0de31\" (UID: \"d748de01-96a9-4c32-8df6-899a9cd0de31\") "
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.020033    6459 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcsbb\" (UniqueName: \"kubernetes.io/projected/d748de01-96a9-4c32-8df6-899a9cd0de31-kube-api-access-qcsbb\") pod \"d748de01-96a9-4c32-8df6-899a9cd0de31\" (UID: \"d748de01-96a9-4c32-8df6-899a9cd0de31\") "
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.020200    6459 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d748de01-96a9-4c32-8df6-899a9cd0de31-test-volume" (OuterVolumeSpecName: "test-volume") pod "d748de01-96a9-4c32-8df6-899a9cd0de31" (UID: "d748de01-96a9-4c32-8df6-899a9cd0de31"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.023401    6459 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d748de01-96a9-4c32-8df6-899a9cd0de31-kube-api-access-qcsbb" (OuterVolumeSpecName: "kube-api-access-qcsbb") pod "d748de01-96a9-4c32-8df6-899a9cd0de31" (UID: "d748de01-96a9-4c32-8df6-899a9cd0de31"). InnerVolumeSpecName "kube-api-access-qcsbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.123735    6459 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qcsbb\" (UniqueName: \"kubernetes.io/projected/d748de01-96a9-4c32-8df6-899a9cd0de31-kube-api-access-qcsbb\") on node \"functional-685000\" DevicePath \"\""
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.123758    6459 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d748de01-96a9-4c32-8df6-899a9cd0de31-test-volume\") on node \"functional-685000\" DevicePath \"\""
	Aug 18 18:48:47 functional-685000 kubelet[6459]: I0818 18:48:47.846063    6459 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef3166ef3f756820b4b4c68f24d5d11a7214c35d938d7a951c7414c2758ea63"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: E0818 18:48:51.484557    6459 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d748de01-96a9-4c32-8df6-899a9cd0de31" containerName="mount-munger"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: I0818 18:48:51.484596    6459 memory_manager.go:354] "RemoveStaleState removing state" podUID="d748de01-96a9-4c32-8df6-899a9cd0de31" containerName="mount-munger"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: I0818 18:48:51.558286    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/23b76b74-6fb7-4d5a-b5b9-56f486e1dca2-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-llrps\" (UID: \"23b76b74-6fb7-4d5a-b5b9-56f486e1dca2\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-llrps"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: I0818 18:48:51.558310    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7hpg\" (UniqueName: \"kubernetes.io/projected/23b76b74-6fb7-4d5a-b5b9-56f486e1dca2-kube-api-access-p7hpg\") pod \"kubernetes-dashboard-695b96c756-llrps\" (UID: \"23b76b74-6fb7-4d5a-b5b9-56f486e1dca2\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-llrps"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: I0818 18:48:51.660621    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6cea0ba3-1604-42bb-b7a2-70ab344a7d92-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-p7kcb\" (UID: \"6cea0ba3-1604-42bb-b7a2-70ab344a7d92\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-p7kcb"
	Aug 18 18:48:51 functional-685000 kubelet[6459]: I0818 18:48:51.660654    6459 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbj4x\" (UniqueName: \"kubernetes.io/projected/6cea0ba3-1604-42bb-b7a2-70ab344a7d92-kube-api-access-sbj4x\") pod \"dashboard-metrics-scraper-c5db448b4-p7kcb\" (UID: \"6cea0ba3-1604-42bb-b7a2-70ab344a7d92\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-p7kcb"
	
	
	==> storage-provisioner [977a3f1ca2da] <==
	I0818 18:47:13.087275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:47:13.092123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:47:13.092141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ba0ec80fde95] <==
	I0818 18:47:46.331421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:47:46.342719       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:47:46.342783       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 18:48:03.766587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 18:48:03.766952       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-685000_39cf4b97-5091-4654-9d02-d4d2c5c55f4d!
	I0818 18:48:03.767851       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97dd3a83-fb41-494c-855a-9d3e862cf5bc", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-685000_39cf4b97-5091-4654-9d02-d4d2c5c55f4d became leader
	I0818 18:48:03.867501       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-685000_39cf4b97-5091-4654-9d02-d4d2c5c55f4d!
	I0818 18:48:23.393930       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0818 18:48:23.393964       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    44390351-2056-4e17-a0c8-bd6df1a2992c 341 0 2024-08-18 18:46:05 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-18 18:46:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-628d7677-ea60-4c41-9a48-8b24447b493c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  628d7677-ea60-4c41-9a48-8b24447b493c 740 0 2024-08-18 18:48:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-18 18:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-18 18:48:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0818 18:48:23.394563       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-628d7677-ea60-4c41-9a48-8b24447b493c" provisioned
	I0818 18:48:23.394577       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0818 18:48:23.394580       1 volume_store.go:212] Trying to save persistentvolume "pvc-628d7677-ea60-4c41-9a48-8b24447b493c"
	I0818 18:48:23.395010       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"628d7677-ea60-4c41-9a48-8b24447b493c", APIVersion:"v1", ResourceVersion:"740", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0818 18:48:23.402450       1 volume_store.go:219] persistentvolume "pvc-628d7677-ea60-4c41-9a48-8b24447b493c" saved
	I0818 18:48:23.402763       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"628d7677-ea60-4c41-9a48-8b24447b493c", APIVersion:"v1", ResourceVersion:"740", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-628d7677-ea60-4c41-9a48-8b24447b493c
	

-- /stdout --
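One reading of the kube-proxy logs above: the nftables cleanup errors ("Operation not supported") only mean the guest kernel lacks nftables support; kube-proxy then logs "Using iptables Proxier" and proceeds, so those errors are unlikely to be the cause of this failure. Two hedged checks to confirm that reading while the functional-685000 profile is still up (nft may not even ship in the guest image):

	# hypothetical follow-up: does the guest kernel expose nftables at all?
	out/minikube-darwin-arm64 -p functional-685000 ssh -- sudo nft list tables
	# the iptables fallback is active if kube-proxy's KUBE-SERVICES chain exists
	out/minikube-darwin-arm64 -p functional-685000 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head -n 5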
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-685000 -n functional-685000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-685000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-p7kcb kubernetes-dashboard-695b96c756-llrps
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-685000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-p7kcb kubernetes-dashboard-695b96c756-llrps
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-685000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-p7kcb kubernetes-dashboard-695b96c756-llrps: exit status 1 (43.783292ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-685000/192.168.105.4
	Start Time:       Sun, 18 Aug 2024 11:48:43 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://1b02ddc862fafc12be00da7c2debed8f51d9511c49ff1a20ef60712666ea705e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 18 Aug 2024 11:48:45 -0700
	      Finished:     Sun, 18 Aug 2024 11:48:45 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qcsbb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qcsbb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/busybox-mount to functional-685000
	  Normal  Pulling    8s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.216s (1.216s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-p7kcb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-llrps" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-685000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-p7kcb kubernetes-dashboard-695b96c756-llrps: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.58s)
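The kubelet log above shows both echoserver pods (hello-node-64b4f8f9ff-6hqtl and hello-node-connect-65d86f57f4-wmjnx) in CrashLoopBackOff on the echoserver-arm container, which is what ServiceCmdConnect ultimately times out waiting for. A sketch of the follow-up that would capture the crashing container's own output, assuming the profile and deployments are still around:

	# hypothetical triage: pull logs from the previous (crashed) echoserver-arm attempt
	kubectl --context functional-685000 logs deploy/hello-node-connect --previous
	# the back-off events recorded by the kubelet should appear here as well
	kubectl --context functional-685000 describe pod -l app=hello-node-connect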

TestMultiControlPlane/serial/StopSecondaryNode (312.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 node stop m02 -v=7 --alsologtostderr
E0818 11:53:06.694518    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:06.701653    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:06.715165    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:06.737048    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:06.780450    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:06.863815    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:07.025536    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:07.349001    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:07.992530    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:09.275994    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:11.838132    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:16.960926    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-108000 node stop m02 -v=7 --alsologtostderr: (12.185327458s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
E0818 11:53:27.204359    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:53:47.686191    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:54:28.648195    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:55:50.571097    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:56:08.666930    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr: exit status 7 (3m45.048859791s)

-- stdout --
	ha-108000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-108000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-108000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-108000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0818 11:53:17.689446    2444 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:53:17.689603    2444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:53:17.689606    2444 out.go:358] Setting ErrFile to fd 2...
	I0818 11:53:17.689609    2444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:53:17.689732    2444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 11:53:17.689842    2444 out.go:352] Setting JSON to false
	I0818 11:53:17.689853    2444 mustload.go:65] Loading cluster: ha-108000
	I0818 11:53:17.689919    2444 notify.go:220] Checking for updates...
	I0818 11:53:17.690095    2444 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:53:17.690103    2444 status.go:255] checking status of ha-108000 ...
	I0818 11:53:17.690775    2444 status.go:330] ha-108000 host status = "Running" (err=<nil>)
	I0818 11:53:17.690781    2444 host.go:66] Checking if "ha-108000" exists ...
	I0818 11:53:17.690894    2444 host.go:66] Checking if "ha-108000" exists ...
	I0818 11:53:17.691007    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 11:53:17.691015    2444 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/id_rsa Username:docker}
	W0818 11:54:32.691146    2444 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0818 11:54:32.691222    2444 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0818 11:54:32.691236    2444 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0818 11:54:32.691241    2444 status.go:257] ha-108000 status: &{Name:ha-108000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 11:54:32.691250    2444 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0818 11:54:32.691254    2444 status.go:255] checking status of ha-108000-m02 ...
	I0818 11:54:32.691437    2444 status.go:330] ha-108000-m02 host status = "Stopped" (err=<nil>)
	I0818 11:54:32.691442    2444 status.go:343] host is not running, skipping remaining checks
	I0818 11:54:32.691444    2444 status.go:257] ha-108000-m02 status: &{Name:ha-108000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 11:54:32.691450    2444 status.go:255] checking status of ha-108000-m03 ...
	I0818 11:54:32.692057    2444 status.go:330] ha-108000-m03 host status = "Running" (err=<nil>)
	I0818 11:54:32.692063    2444 host.go:66] Checking if "ha-108000-m03" exists ...
	I0818 11:54:32.692158    2444 host.go:66] Checking if "ha-108000-m03" exists ...
	I0818 11:54:32.692277    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 11:54:32.692286    2444 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m03/id_rsa Username:docker}
	W0818 11:55:47.693859    2444 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0818 11:55:47.694000    2444 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0818 11:55:47.694018    2444 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0818 11:55:47.694022    2444 status.go:257] ha-108000-m03 status: &{Name:ha-108000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 11:55:47.694032    2444 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0818 11:55:47.694036    2444 status.go:255] checking status of ha-108000-m04 ...
	I0818 11:55:47.694971    2444 status.go:330] ha-108000-m04 host status = "Running" (err=<nil>)
	I0818 11:55:47.694982    2444 host.go:66] Checking if "ha-108000-m04" exists ...
	I0818 11:55:47.695087    2444 host.go:66] Checking if "ha-108000-m04" exists ...
	I0818 11:55:47.695205    2444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 11:55:47.695213    2444 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m04/id_rsa Username:docker}
	W0818 11:57:02.697328    2444 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0818 11:57:02.697531    2444 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0818 11:57:02.697572    2444 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0818 11:57:02.697591    2444 status.go:257] ha-108000-m04 status: &{Name:ha-108000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0818 11:57:02.697636    2444 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr": ha-108000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-108000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr": ha-108000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-108000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr": ha-108000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-108000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-108000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
E0818 11:58:06.692337    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 3 (1m15.072875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 11:58:17.772172    2469 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0818 11:58:17.772207    2469 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.31s)
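All three unreachable nodes fail identically here: the ssh dial to port 22 (192.168.105.5, .7, .8) times out after roughly 75 seconds each, which is why a single status call runs 3m45s. A minimal reachability sketch from the CI host, assuming the macOS (BSD) nc and the socket_vmnet layout this job uses:

	# hypothetical check: does anything accept TCP on a recorded node IP? (-G is the BSD nc connect timeout)
	nc -z -G 5 192.168.105.5 22; echo "exit=$?"
	# is socket_vmnet still running, and is its socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet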

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0818 11:58:34.413684    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.086087041s)
ha_test.go:413: expected profile "ha-108000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-108000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-108000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-108000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
E0818 12:01:08.671744    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 3 (1m15.0422655s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:02:02.912316    2780 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0818 12:02:02.912357    2780 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)
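The assertion at ha_test.go:413 keys off the Status field of `profile list --output json`, which reported "Stopped" where the test expects "Degraded" (the call itself took 2m30s, hitting the same ssh timeouts as the previous test). A one-liner to pull just that field out of the JSON blob above, assuming jq is available on the host:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name)\t\(.Status)"'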

TestMultiControlPlane/serial/RestartSecondaryNode (305.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.131169583s)

-- stdout --
	* Starting "ha-108000-m02" control-plane node in "ha-108000" cluster
	* Restarting existing qemu2 VM for "ha-108000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-108000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:02:02.986764    2790 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:02:02.987065    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:02:02.987069    2790 out.go:358] Setting ErrFile to fd 2...
	I0818 12:02:02.987072    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:02:02.987241    2790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:02:02.987559    2790 mustload.go:65] Loading cluster: ha-108000
	I0818 12:02:02.987861    2790 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0818 12:02:02.988177    2790 host.go:58] "ha-108000-m02" host status: Stopped
	I0818 12:02:02.991589    2790 out.go:177] * Starting "ha-108000-m02" control-plane node in "ha-108000" cluster
	I0818 12:02:02.994605    2790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:02:02.994621    2790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:02:02.994628    2790 cache.go:56] Caching tarball of preloaded images
	I0818 12:02:02.994712    2790 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:02:02.994719    2790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:02:02.994818    2790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/ha-108000/config.json ...
	I0818 12:02:02.995161    2790 start.go:360] acquireMachinesLock for ha-108000-m02: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:02:02.995214    2790 start.go:364] duration metric: took 37.583µs to acquireMachinesLock for "ha-108000-m02"
	I0818 12:02:02.995224    2790 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:02:02.995230    2790 fix.go:54] fixHost starting: m02
	I0818 12:02:02.995396    2790 fix.go:112] recreateIfNeeded on ha-108000-m02: state=Stopped err=<nil>
	W0818 12:02:02.995403    2790 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:02:02.999536    2790 out.go:177] * Restarting existing qemu2 VM for "ha-108000-m02" ...
	I0818 12:02:03.003568    2790 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:02:03.003616    2790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:59:53:0b:4f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/disk.qcow2
	I0818 12:02:03.006186    2790 main.go:141] libmachine: STDOUT: 
	I0818 12:02:03.006219    2790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:02:03.006253    2790 fix.go:56] duration metric: took 11.022583ms for fixHost
	I0818 12:02:03.006259    2790 start.go:83] releasing machines lock for "ha-108000-m02", held for 11.040042ms
	W0818 12:02:03.006269    2790 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:02:03.006312    2790 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:02:03.006318    2790 start.go:729] Will try again in 5 seconds ...
	I0818 12:02:08.008606    2790 start.go:360] acquireMachinesLock for ha-108000-m02: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:02:08.009063    2790 start.go:364] duration metric: took 348.625µs to acquireMachinesLock for "ha-108000-m02"
	I0818 12:02:08.009183    2790 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:02:08.009198    2790 fix.go:54] fixHost starting: m02
	I0818 12:02:08.009696    2790 fix.go:112] recreateIfNeeded on ha-108000-m02: state=Stopped err=<nil>
	W0818 12:02:08.009712    2790 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:02:08.014487    2790 out.go:177] * Restarting existing qemu2 VM for "ha-108000-m02" ...
	I0818 12:02:08.018490    2790 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:02:08.018649    2790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:59:53:0b:4f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/disk.qcow2
	I0818 12:02:08.025001    2790 main.go:141] libmachine: STDOUT: 
	I0818 12:02:08.025058    2790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:02:08.025120    2790 fix.go:56] duration metric: took 15.923292ms for fixHost
	I0818 12:02:08.025132    2790 start.go:83] releasing machines lock for "ha-108000-m02", held for 16.054167ms
	W0818 12:02:08.025358    2790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:02:08.029474    2790 out.go:201] 
	W0818 12:02:08.033409    2790 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:02:08.033423    2790 out.go:270] * 
	* 
	W0818 12:02:08.038922    2790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:02:08.043499    2790 out.go:201] 

** /stderr **
ha_test.go:422: I0818 12:02:02.986764    2790 out.go:345] Setting OutFile to fd 1 ...
I0818 12:02:02.987065    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 12:02:02.987069    2790 out.go:358] Setting ErrFile to fd 2...
I0818 12:02:02.987072    2790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 12:02:02.987241    2790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 12:02:02.987559    2790 mustload.go:65] Loading cluster: ha-108000
I0818 12:02:02.987861    2790 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0818 12:02:02.988177    2790 host.go:58] "ha-108000-m02" host status: Stopped
I0818 12:02:02.991589    2790 out.go:177] * Starting "ha-108000-m02" control-plane node in "ha-108000" cluster
I0818 12:02:02.994605    2790 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0818 12:02:02.994621    2790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0818 12:02:02.994628    2790 cache.go:56] Caching tarball of preloaded images
I0818 12:02:02.994712    2790 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0818 12:02:02.994719    2790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0818 12:02:02.994818    2790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/ha-108000/config.json ...
I0818 12:02:02.995161    2790 start.go:360] acquireMachinesLock for ha-108000-m02: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0818 12:02:02.995214    2790 start.go:364] duration metric: took 37.583µs to acquireMachinesLock for "ha-108000-m02"
I0818 12:02:02.995224    2790 start.go:96] Skipping create...Using existing machine configuration
I0818 12:02:02.995230    2790 fix.go:54] fixHost starting: m02
I0818 12:02:02.995396    2790 fix.go:112] recreateIfNeeded on ha-108000-m02: state=Stopped err=<nil>
W0818 12:02:02.995403    2790 fix.go:138] unexpected machine state, will restart: <nil>
I0818 12:02:02.999536    2790 out.go:177] * Restarting existing qemu2 VM for "ha-108000-m02" ...
I0818 12:02:03.003568    2790 qemu.go:418] Using hvf for hardware acceleration
I0818 12:02:03.003616    2790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:59:53:0b:4f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/disk.qcow2
I0818 12:02:03.006186    2790 main.go:141] libmachine: STDOUT: 
I0818 12:02:03.006219    2790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0818 12:02:03.006253    2790 fix.go:56] duration metric: took 11.022583ms for fixHost
I0818 12:02:03.006259    2790 start.go:83] releasing machines lock for "ha-108000-m02", held for 11.040042ms
W0818 12:02:03.006269    2790 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0818 12:02:03.006312    2790 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0818 12:02:03.006318    2790 start.go:729] Will try again in 5 seconds ...
I0818 12:02:08.008606    2790 start.go:360] acquireMachinesLock for ha-108000-m02: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0818 12:02:08.009063    2790 start.go:364] duration metric: took 348.625µs to acquireMachinesLock for "ha-108000-m02"
I0818 12:02:08.009183    2790 start.go:96] Skipping create...Using existing machine configuration
I0818 12:02:08.009198    2790 fix.go:54] fixHost starting: m02
I0818 12:02:08.009696    2790 fix.go:112] recreateIfNeeded on ha-108000-m02: state=Stopped err=<nil>
W0818 12:02:08.009712    2790 fix.go:138] unexpected machine state, will restart: <nil>
I0818 12:02:08.014487    2790 out.go:177] * Restarting existing qemu2 VM for "ha-108000-m02" ...
I0818 12:02:08.018490    2790 qemu.go:418] Using hvf for hardware acceleration
I0818 12:02:08.018649    2790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:59:53:0b:4f:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/disk.qcow2
I0818 12:02:08.025001    2790 main.go:141] libmachine: STDOUT: 
I0818 12:02:08.025058    2790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0818 12:02:08.025120    2790 fix.go:56] duration metric: took 15.923292ms for fixHost
I0818 12:02:08.025132    2790 start.go:83] releasing machines lock for "ha-108000-m02", held for 16.054167ms
W0818 12:02:08.025358    2790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0818 12:02:08.029474    2790 out.go:201] 
W0818 12:02:08.033409    2790 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0818 12:02:08.033423    2790 out.go:270] * 
* 
W0818 12:02:08.038922    2790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0818 12:02:08.043499    2790 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-108000 node start m02 -v=7 --alsologtostderr": exit status 80
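Every failed start attempt above dies at the same first step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU is never launched, and the single retry five seconds later (start.go:729) fails identically. A quick host-side check, sketched here under the assumption of the stock lima-vm/socket_vmnet layout that the logged command line already uses:

	# Is the Unix socket present, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet reachable"

	# If not, (re)start the daemon. The gateway address is an assumption,
	# chosen to match the 192.168.105.x guest addresses in this log.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

"Connection refused" on a Unix socket means nothing is listening at that path, so the daemon is either not running or bound elsewhere.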
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
E0818 12:02:31.770764    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:03:06.704173    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr: exit status 7 (3m45.072235916s)

-- stdout --
	ha-108000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-108000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-108000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-108000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0818 12:02:08.102723    2794 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:02:08.102927    2794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:02:08.102931    2794 out.go:358] Setting ErrFile to fd 2...
	I0818 12:02:08.102934    2794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:02:08.103085    2794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:02:08.103224    2794 out.go:352] Setting JSON to false
	I0818 12:02:08.103242    2794 mustload.go:65] Loading cluster: ha-108000
	I0818 12:02:08.103284    2794 notify.go:220] Checking for updates...
	I0818 12:02:08.103505    2794 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:02:08.103514    2794 status.go:255] checking status of ha-108000 ...
	I0818 12:02:08.104357    2794 status.go:330] ha-108000 host status = "Running" (err=<nil>)
	I0818 12:02:08.104366    2794 host.go:66] Checking if "ha-108000" exists ...
	I0818 12:02:08.104493    2794 host.go:66] Checking if "ha-108000" exists ...
	I0818 12:02:08.104627    2794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:02:08.104637    2794 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/id_rsa Username:docker}
	W0818 12:03:23.106751    2794 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0818 12:03:23.106964    2794 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0818 12:03:23.106993    2794 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0818 12:03:23.107006    2794 status.go:257] ha-108000 status: &{Name:ha-108000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 12:03:23.107031    2794 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0818 12:03:23.107043    2794 status.go:255] checking status of ha-108000-m02 ...
	I0818 12:03:23.107702    2794 status.go:330] ha-108000-m02 host status = "Stopped" (err=<nil>)
	I0818 12:03:23.107717    2794 status.go:343] host is not running, skipping remaining checks
	I0818 12:03:23.107726    2794 status.go:257] ha-108000-m02 status: &{Name:ha-108000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:03:23.107743    2794 status.go:255] checking status of ha-108000-m03 ...
	I0818 12:03:23.109588    2794 status.go:330] ha-108000-m03 host status = "Running" (err=<nil>)
	I0818 12:03:23.109610    2794 host.go:66] Checking if "ha-108000-m03" exists ...
	I0818 12:03:23.110027    2794 host.go:66] Checking if "ha-108000-m03" exists ...
	I0818 12:03:23.110438    2794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:03:23.110458    2794 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m03/id_rsa Username:docker}
	W0818 12:04:38.112286    2794 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0818 12:04:38.112403    2794 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0818 12:04:38.112425    2794 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0818 12:04:38.112436    2794 status.go:257] ha-108000-m03 status: &{Name:ha-108000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 12:04:38.112458    2794 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0818 12:04:38.112475    2794 status.go:255] checking status of ha-108000-m04 ...
	I0818 12:04:38.114194    2794 status.go:330] ha-108000-m04 host status = "Running" (err=<nil>)
	I0818 12:04:38.114210    2794 host.go:66] Checking if "ha-108000-m04" exists ...
	I0818 12:04:38.114486    2794 host.go:66] Checking if "ha-108000-m04" exists ...
	I0818 12:04:38.114799    2794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:04:38.114814    2794 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m04/id_rsa Username:docker}
	W0818 12:05:53.116820    2794 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0818 12:05:53.116976    2794 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0818 12:05:53.117004    2794 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0818 12:05:53.117018    2794 status.go:257] ha-108000-m04 status: &{Name:ha-108000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0818 12:05:53.117050    2794 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr" : exit status 7
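The 3m45s wall time of the status command is fully accounted for by the log above: each SSH dial to an unreachable guest times out after 75 seconds (e.g. 12:02:08 to 12:03:23), and the three nominally running nodes are probed sequentially, so 3 x 75 s = 225 s = 3m45s. A faster reachability pre-check is possible with a short connect timeout (a sketch; the IPs are the ones reported above):

	for ip in 192.168.105.5 192.168.105.7 192.168.105.8; do
	  nc -z -w 5 "$ip" 22 && echo "$ip: ssh reachable" || echo "$ip: unreachable"
	done

The hosts report status "Running" presumably because the driver still sees their QEMU processes, while SSH fails: without socket_vmnet the guests have no working network path.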
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
E0818 12:06:08.677229    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 3 (1m15.068579084s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:07:08.185061    2809 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0818 12:07:08.185096    2809 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.27s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-108000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-108000 -v=7 --alsologtostderr
E0818 12:11:08.675568    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:13:06.701026    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-108000 -v=7 --alsologtostderr: (5m27.170876208s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-108000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-108000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219785667s)

-- stdout --
	* [ha-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-108000" primary control-plane node in "ha-108000" cluster
	* Restarting existing qemu2 VM for "ha-108000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-108000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:15:05.595650    2900 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:15:05.595817    2900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:05.595821    2900 out.go:358] Setting ErrFile to fd 2...
	I0818 12:15:05.595825    2900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:05.595995    2900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:15:05.597306    2900 out.go:352] Setting JSON to false
	I0818 12:15:05.617276    2900 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2675,"bootTime":1724005830,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:15:05.617340    2900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:15:05.621911    2900 out.go:177] * [ha-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:15:05.628933    2900 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:15:05.629009    2900 notify.go:220] Checking for updates...
	I0818 12:15:05.637778    2900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:15:05.641759    2900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:15:05.644768    2900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:15:05.647748    2900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:15:05.650817    2900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:15:05.652468    2900 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:15:05.652520    2900 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:15:05.656730    2900 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:15:05.663551    2900 start.go:297] selected driver: qemu2
	I0818 12:15:05.663558    2900 start.go:901] validating driver "qemu2" against &{Name:ha-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-108000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:15:05.663628    2900 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:15:05.666246    2900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:15:05.666274    2900 cni.go:84] Creating CNI manager for ""
	I0818 12:15:05.666283    2900 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:15:05.666364    2900 start.go:340] cluster config:
	{Name:ha-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-108000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:15:05.670277    2900 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:15:05.678721    2900 out.go:177] * Starting "ha-108000" primary control-plane node in "ha-108000" cluster
	I0818 12:15:05.682757    2900 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:15:05.682776    2900 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:15:05.682789    2900 cache.go:56] Caching tarball of preloaded images
	I0818 12:15:05.682857    2900 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:15:05.682867    2900 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:15:05.682948    2900 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/ha-108000/config.json ...
	I0818 12:15:05.683428    2900 start.go:360] acquireMachinesLock for ha-108000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:15:05.683463    2900 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-108000"
	I0818 12:15:05.683473    2900 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:15:05.683478    2900 fix.go:54] fixHost starting: 
	I0818 12:15:05.683606    2900 fix.go:112] recreateIfNeeded on ha-108000: state=Stopped err=<nil>
	W0818 12:15:05.683614    2900 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:15:05.687783    2900 out.go:177] * Restarting existing qemu2 VM for "ha-108000" ...
	I0818 12:15:05.695773    2900 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:15:05.695815    2900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4c:0c:59:1d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/disk.qcow2
	I0818 12:15:05.697891    2900 main.go:141] libmachine: STDOUT: 
	I0818 12:15:05.697913    2900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:15:05.697944    2900 fix.go:56] duration metric: took 14.466ms for fixHost
	I0818 12:15:05.697949    2900 start.go:83] releasing machines lock for "ha-108000", held for 14.481083ms
	W0818 12:15:05.697956    2900 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:15:05.697987    2900 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:15:05.697991    2900 start.go:729] Will try again in 5 seconds ...
	I0818 12:15:10.699290    2900 start.go:360] acquireMachinesLock for ha-108000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:15:10.699651    2900 start.go:364] duration metric: took 276.25µs to acquireMachinesLock for "ha-108000"
	I0818 12:15:10.699760    2900 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:15:10.699780    2900 fix.go:54] fixHost starting: 
	I0818 12:15:10.700425    2900 fix.go:112] recreateIfNeeded on ha-108000: state=Stopped err=<nil>
	W0818 12:15:10.700452    2900 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:15:10.706820    2900 out.go:177] * Restarting existing qemu2 VM for "ha-108000" ...
	I0818 12:15:10.710786    2900 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:15:10.711004    2900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4c:0c:59:1d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/disk.qcow2
	I0818 12:15:10.719836    2900 main.go:141] libmachine: STDOUT: 
	I0818 12:15:10.719894    2900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:15:10.719966    2900 fix.go:56] duration metric: took 20.182709ms for fixHost
	I0818 12:15:10.719982    2900 start.go:83] releasing machines lock for "ha-108000", held for 20.308833ms
	W0818 12:15:10.720118    2900 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:15:10.727805    2900 out.go:201] 
	W0818 12:15:10.731784    2900 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:15:10.731807    2900 out.go:270] * 
	* 
	W0818 12:15:10.734085    2900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:15:10.741832    2900 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-108000 -v=7 --alsologtostderr" : exit status 80
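(Note the harness echoes the args of the earlier node list command here, but the command that exited with status 80 is the start above.) The full-cluster restart fails on the primary node exactly as the secondary did, now reported as GUEST_PROVISION rather than GUEST_NODE_PROVISION. The driver-level failure can be reproduced without minikube, since socket_vmnet_client connects to the socket and then execs its trailing command with the connection passed as fd 3 (visible as -netdev socket,id=net0,fd=3 in the logged QEMU args). A minimal probe, as a sketch:

	# `true` stands in for qemu-system-aarch64, so only the socket
	# connection is exercised.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  || echo "connect to /var/run/socket_vmnet failed"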
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-108000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 7 (33.164875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.56s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.134833ms)

-- stdout --
	* The control-plane node ha-108000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-108000"

-- /stdout --
** stderr ** 
	I0818 12:15:10.881794    2912 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:15:10.882146    2912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:10.882152    2912 out.go:358] Setting ErrFile to fd 2...
	I0818 12:15:10.882154    2912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:10.882365    2912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:15:10.882752    2912 mustload.go:65] Loading cluster: ha-108000
	I0818 12:15:10.882972    2912 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0818 12:15:10.883260    2912 out.go:270] ! The control-plane node ha-108000 host is not running (will try others): state=Stopped
	! The control-plane node ha-108000 host is not running (will try others): state=Stopped
	W0818 12:15:10.883370    2912 out.go:270] ! The control-plane node ha-108000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-108000-m02 host is not running (will try others): state=Stopped
	I0818 12:15:10.887010    2912 out.go:177] * The control-plane node ha-108000-m03 host is not running: state=Stopped
	I0818 12:15:10.890013    2912 out.go:177]   To start a cluster, run: "minikube start -p ha-108000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-108000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr: exit status 7 (30.227ms)

-- stdout --
	ha-108000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-108000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-108000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-108000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0818 12:15:10.921996    2914 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:15:10.922158    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:10.922162    2914 out.go:358] Setting ErrFile to fd 2...
	I0818 12:15:10.922164    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:10.922302    2914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:15:10.922423    2914 out.go:352] Setting JSON to false
	I0818 12:15:10.922438    2914 mustload.go:65] Loading cluster: ha-108000
	I0818 12:15:10.922494    2914 notify.go:220] Checking for updates...
	I0818 12:15:10.922672    2914 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:15:10.922678    2914 status.go:255] checking status of ha-108000 ...
	I0818 12:15:10.922905    2914 status.go:330] ha-108000 host status = "Stopped" (err=<nil>)
	I0818 12:15:10.922909    2914 status.go:343] host is not running, skipping remaining checks
	I0818 12:15:10.922911    2914 status.go:257] ha-108000 status: &{Name:ha-108000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:15:10.922924    2914 status.go:255] checking status of ha-108000-m02 ...
	I0818 12:15:10.923018    2914 status.go:330] ha-108000-m02 host status = "Stopped" (err=<nil>)
	I0818 12:15:10.923021    2914 status.go:343] host is not running, skipping remaining checks
	I0818 12:15:10.923023    2914 status.go:257] ha-108000-m02 status: &{Name:ha-108000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:15:10.923027    2914 status.go:255] checking status of ha-108000-m03 ...
	I0818 12:15:10.923111    2914 status.go:330] ha-108000-m03 host status = "Stopped" (err=<nil>)
	I0818 12:15:10.923113    2914 status.go:343] host is not running, skipping remaining checks
	I0818 12:15:10.923115    2914 status.go:257] ha-108000-m03 status: &{Name:ha-108000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:15:10.923118    2914 status.go:255] checking status of ha-108000-m04 ...
	I0818 12:15:10.923216    2914 status.go:330] ha-108000-m04 host status = "Stopped" (err=<nil>)
	I0818 12:15:10.923219    2914 status.go:343] host is not running, skipping remaining checks
	I0818 12:15:10.923221    2914 status.go:257] ha-108000-m04 status: &{Name:ha-108000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr" : exit status 7
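Unlike the earlier status run that took 3m45s, this one returns in about 30 ms: every node's host state is Stopped, and once the driver reports Stopped, status skips the SSH probes entirely (status.go:343, "host is not running, skipping remaining checks"). The skip is easy to confirm from the verbose output (a sketch):

	out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr 2>&1 \
	  | grep 'skipping remaining checks'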
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 7 (29.153458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-108000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-108000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-108000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-108000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 7 (29.350708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
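
Note: the profile dump captured above is the useful evidence in this failure. It records the intended HA topology — three control-plane nodes (192.168.105.5-7, port 8443) plus one worker, m04 (192.168.105.8, port 0), behind the API-server VIP 192.168.105.254 — which is exactly the set of machines the StopCluster test below then tries to reach. A minimal sketch that pulls out just that node list (the JSON field names come from the dump above; the struct subset and program are ours, for illustration only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Field names match the profile JSON captured in the failure above.
type node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

func main() {
	// Trimmed copy of the "Nodes" array from the dump above.
	raw := `[
	  {"Name":"",    "IP":"192.168.105.5","Port":8443,"ControlPlane":true, "Worker":true},
	  {"Name":"m02", "IP":"192.168.105.6","Port":8443,"ControlPlane":true, "Worker":true},
	  {"Name":"m03", "IP":"192.168.105.7","Port":8443,"ControlPlane":true, "Worker":true},
	  {"Name":"m04", "IP":"192.168.105.8","Port":0,   "ControlPlane":false,"Worker":true}
	]`
	var nodes []node
	if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		fmt.Printf("%-4s %-15s control-plane=%-5v port=%d\n", n.Name, n.IP, n.ControlPlane, n.Port)
	}
}
```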

TestMultiControlPlane/serial/StopCluster (226.79s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 stop -v=7 --alsologtostderr
E0818 12:16:08.672883    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:18:06.681413    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 stop -v=7 --alsologtostderr: signal: killed (3m46.715624917s)

-- stdout --
	* Stopping node "ha-108000-m04"  ...
	* Stopping node "ha-108000-m03"  ...
	* Stopping node "ha-108000-m02"  ...
	* Stopping node "ha-108000"  ...

-- /stdout --
** stderr ** 
	I0818 12:15:11.058952    2923 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:15:11.059083    2923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:11.059086    2923 out.go:358] Setting ErrFile to fd 2...
	I0818 12:15:11.059089    2923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:15:11.059214    2923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:15:11.059420    2923 out.go:352] Setting JSON to false
	I0818 12:15:11.059530    2923 mustload.go:65] Loading cluster: ha-108000
	I0818 12:15:11.059757    2923 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:15:11.059807    2923 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/ha-108000/config.json ...
	I0818 12:15:11.060067    2923 mustload.go:65] Loading cluster: ha-108000
	I0818 12:15:11.060144    2923 config.go:182] Loaded profile config "ha-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:15:11.060161    2923 stop.go:39] StopHost: ha-108000-m04
	I0818 12:15:11.063972    2923 out.go:177] * Stopping node "ha-108000-m04"  ...
	I0818 12:15:11.069964    2923 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 12:15:11.069998    2923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 12:15:11.070013    2923 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m04/id_rsa Username:docker}
	W0818 12:16:26.059603    2923 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0818 12:16:26.059953    2923 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0818 12:16:26.060135    2923 main.go:141] libmachine: Stopping "ha-108000-m04"...
	I0818 12:16:26.060272    2923 stop.go:66] stop err: Machine "ha-108000-m04" is already stopped.
	I0818 12:16:26.060300    2923 stop.go:69] host is already stopped
	I0818 12:16:26.060326    2923 stop.go:39] StopHost: ha-108000-m03
	I0818 12:16:26.068550    2923 out.go:177] * Stopping node "ha-108000-m03"  ...
	I0818 12:16:26.072493    2923 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 12:16:26.072730    2923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 12:16:26.072760    2923 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m03/id_rsa Username:docker}
	W0818 12:17:41.069689    2923 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0818 12:17:41.069898    2923 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0818 12:17:41.070064    2923 main.go:141] libmachine: Stopping "ha-108000-m03"...
	I0818 12:17:41.070203    2923 stop.go:66] stop err: Machine "ha-108000-m03" is already stopped.
	I0818 12:17:41.070231    2923 stop.go:69] host is already stopped
	I0818 12:17:41.070259    2923 stop.go:39] StopHost: ha-108000-m02
	I0818 12:17:41.080495    2923 out.go:177] * Stopping node "ha-108000-m02"  ...
	I0818 12:17:41.091109    2923 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 12:17:41.091249    2923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 12:17:41.091282    2923 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000-m02/id_rsa Username:docker}
	W0818 12:18:56.093119    2923 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.6:22: connect: operation timed out
	W0818 12:18:56.093317    2923 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.6:22: connect: operation timed out
	I0818 12:18:56.093376    2923 main.go:141] libmachine: Stopping "ha-108000-m02"...
	I0818 12:18:56.093527    2923 stop.go:66] stop err: Machine "ha-108000-m02" is already stopped.
	I0818 12:18:56.093552    2923 stop.go:69] host is already stopped
	I0818 12:18:56.093573    2923 stop.go:39] StopHost: ha-108000
	I0818 12:18:56.102867    2923 out.go:177] * Stopping node "ha-108000"  ...
	I0818 12:18:56.105909    2923 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 12:18:56.106077    2923 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 12:18:56.106112    2923 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/ha-108000/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-108000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr: context deadline exceeded (2.042µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-108000 -n ha-108000: exit status 7 (71.48575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (226.79s)
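
Note: the 226.79s duration is fully accounted for by the SSH dials in the stderr log. Each already-stopped node costs one OS-level TCP connect timeout before the backup step is abandoned (12:15:11 → 12:16:26 → 12:17:41 → 12:18:56, roughly 75s apiece, about 225s in total), so the harness kills the process while the fourth dial is still pending. A condensed sketch of that stop loop follows; the type and helper names are invented stand-ins for minikube's stop.go/machine.go internals, not the real code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

type target struct{ name, addr string }

func backupConfig(addr string) error {
	// The real code opens an SSH session first; against a powered-off VM the
	// TCP connect only fails after the OS connect timeout, which the log
	// shows as ~75s. We make that timeout explicit here.
	conn, err := net.DialTimeout("tcp", addr, 75*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	return nil // ...mkdir -p /var/lib/minikube/backup and copy /etc/cni, /etc/kubernetes...
}

func main() {
	nodes := []target{
		{"ha-108000-m04", "192.168.105.8:22"},
		{"ha-108000-m03", "192.168.105.7:22"},
		{"ha-108000-m02", "192.168.105.6:22"},
		{"ha-108000", "192.168.105.5:22"},
	}
	for _, n := range nodes {
		fmt.Printf("* Stopping node %q ...\n", n.name)
		if err := backupConfig(n.addr); err != nil {
			// stop.go:55 in the log: backup failure is logged, not fatal.
			fmt.Printf("W failed to complete vm config backup (will continue): %v\n", err)
		}
		// libmachine stop follows; an already-stopped machine is a no-op.
	}
	// Three unreachable nodes x ~75s ≈ 225s, so the fourth dial is still
	// pending when the test harness kills the process at 3m46s.
}
```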

TestImageBuild/serial/Setup (10.24s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 : exit status 80 (10.176161375s)

-- stdout --
	* [image-254000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-254000" primary control-plane node in "image-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-254000 -n image-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-254000 -n image-254000: exit status 7 (67.276291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.24s)
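
Note: this is the first compact instance of the failure that recurs through the rest of this report — every qemu2 start dies because nothing is listening on /var/run/socket_vmnet. A standalone preflight check (our sketch, not a minikube command) reproduces the same error:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// If the socket_vmnet daemon is down but the socket file still exists,
	// this fails with "connection refused" — the error wrapped into every
	// GUEST_PROVISION failure above. A missing socket file would instead
	// yield "no such file or directory".
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet preflight failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is reachable")
}
```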

TestJSONOutput/start/Command (9.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-220000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0818 12:19:11.750953    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-220000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.776699875s)

-- stdout --
	{"specversion":"1.0","id":"f2b1a251-6d93-4dac-aa62-6d56849f863e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-220000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9817164-d457-4af6-bcc5-9c93ca3fae10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"ef18137b-8605-4051-a873-4dfe45a2d5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig"}}
	{"specversion":"1.0","id":"b5c712c6-7108-488f-8b94-f51180297b52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"66f5331e-6515-43d2-994c-dbbc5527bb5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3dc19b47-7135-48d7-b236-cf3f579df30a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube"}}
	{"specversion":"1.0","id":"85ea42d2-de03-48ee-9820-bd3d508f35e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef7bdfa6-34ea-4f53-935d-4d48d35330ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b66d0962-8659-4378-bd6b-29941f8d5f07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a6c5a7c8-817c-4c98-b7a9-a075e772adcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-220000\" primary control-plane node in \"json-output-220000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e75d919-a593-4639-ad31-c39a53691590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0cffcb23-bf17-4bb5-bcd3-027293a9b57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-220000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3421c6ca-00ba-437f-8dcf-d155e836f395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"e9cf95b9-8ae7-44e3-893f-d9bf8d5b1e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1a39d770-33c0-462e-b165-969a33d3c516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-220000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"36e7a024-b256-4050-a3dc-6f946c8bdfc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a99f96d3-eae0-42d7-823a-fe84fad9d7d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-220000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
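
Note: the secondary error here ("converting to cloud events: invalid character 'O' looking for beginning of value") is mechanical: the test decodes stdout line by line as JSON cloud events, and qemu's literal `OUTPUT:` banner is not JSON. Roughly (our sketch of the check, not the test's exact source; the sample lines are copied from the output above):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// Every non-empty stdout line must parse as a standalone JSON event.
func main() {
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev map[string]any
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			// Prints: converting to cloud events: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}
```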

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-220000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-220000 --output=json --user=testUser: exit status 83 (77.841ms)

-- stdout --
	{"specversion":"1.0","id":"babab387-e42f-4428-8327-647eab46a340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-220000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"ee2308b3-e1da-4fcd-84f4-e9356786e848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-220000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-220000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-220000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-220000 --output=json --user=testUser: exit status 83 (43.870042ms)

-- stdout --
	* The control-plane node json-output-220000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-220000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-220000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-220000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-567000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-567000 --driver=qemu2 : exit status 80 (9.906101709s)

-- stdout --
	* [first-567000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-567000" primary control-plane node in "first-567000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-567000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-567000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-18 12:19:31.87327 -0700 PDT m=+2535.630925209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-569000 -n second-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-569000 -n second-569000: exit status 85 (77.222125ms)

-- stdout --
	* Profile "second-569000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-569000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-569000" host is not running, skipping log retrieval (state="* Profile \"second-569000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-569000\"")
helpers_test.go:175: Cleaning up "second-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-569000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-18 12:19:32.062817 -0700 PDT m=+2535.820473542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-567000 -n first-567000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-567000 -n first-567000: exit status 7 (29.489292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-567000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-567000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-567000
--- FAIL: TestMinikubeProfile (10.20s)

TestMountStart/serial/StartWithMountFirst (9.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-920000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-920000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.867223792s)

-- stdout --
	* [mount-start-1-920000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-920000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-920000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-920000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-920000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-920000 -n mount-start-1-920000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-920000 -n mount-start-1-920000: exit status 7 (67.789292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-920000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.94s)

TestMultiNode/serial/FreshStart2Nodes (10s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-571000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-571000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.933096667s)

-- stdout --
	* [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:19:42.322556    3091 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:19:42.322673    3091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:19:42.322676    3091 out.go:358] Setting ErrFile to fd 2...
	I0818 12:19:42.322678    3091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:19:42.322831    3091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:19:42.323862    3091 out.go:352] Setting JSON to false
	I0818 12:19:42.339958    3091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2952,"bootTime":1724005830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:19:42.340031    3091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:19:42.346763    3091 out.go:177] * [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:19:42.355696    3091 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:19:42.355757    3091 notify.go:220] Checking for updates...
	I0818 12:19:42.364600    3091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:19:42.367746    3091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:19:42.370593    3091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:19:42.373704    3091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:19:42.376714    3091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:19:42.379798    3091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:19:42.383589    3091 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:19:42.390636    3091 start.go:297] selected driver: qemu2
	I0818 12:19:42.390645    3091 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:19:42.390652    3091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:19:42.392881    3091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:19:42.395682    3091 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:19:42.398741    3091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:19:42.398792    3091 cni.go:84] Creating CNI manager for ""
	I0818 12:19:42.398799    3091 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0818 12:19:42.398808    3091 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 12:19:42.398839    3091 start.go:340] cluster config:
	{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:19:42.402510    3091 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:19:42.409632    3091 out.go:177] * Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	I0818 12:19:42.413638    3091 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:19:42.413653    3091 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:19:42.413661    3091 cache.go:56] Caching tarball of preloaded images
	I0818 12:19:42.413725    3091 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:19:42.413731    3091 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:19:42.413937    3091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/multinode-571000/config.json ...
	I0818 12:19:42.413949    3091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/multinode-571000/config.json: {Name:mk649ea4cd7cacf47900effc52f639965294c48b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:19:42.414179    3091 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:19:42.414216    3091 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "multinode-571000"
	I0818 12:19:42.414230    3091 start.go:93] Provisioning new machine with config: &{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:19:42.414258    3091 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:19:42.422637    3091 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:19:42.440844    3091 start.go:159] libmachine.API.Create for "multinode-571000" (driver="qemu2")
	I0818 12:19:42.440869    3091 client.go:168] LocalClient.Create starting
	I0818 12:19:42.440938    3091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:19:42.440969    3091 main.go:141] libmachine: Decoding PEM data...
	I0818 12:19:42.440978    3091 main.go:141] libmachine: Parsing certificate...
	I0818 12:19:42.441015    3091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:19:42.441039    3091 main.go:141] libmachine: Decoding PEM data...
	I0818 12:19:42.441049    3091 main.go:141] libmachine: Parsing certificate...
	I0818 12:19:42.441409    3091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:19:42.592856    3091 main.go:141] libmachine: Creating SSH key...
	I0818 12:19:42.765585    3091 main.go:141] libmachine: Creating Disk image...
	I0818 12:19:42.765591    3091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:19:42.765794    3091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:42.775493    3091 main.go:141] libmachine: STDOUT: 
	I0818 12:19:42.775518    3091 main.go:141] libmachine: STDERR: 
	I0818 12:19:42.775567    3091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2 +20000M
	I0818 12:19:42.783707    3091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:19:42.783721    3091 main.go:141] libmachine: STDERR: 
	I0818 12:19:42.783743    3091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:42.783750    3091 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:19:42.783761    3091 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:19:42.783786    3091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:05:53:1c:5d:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:42.785405    3091 main.go:141] libmachine: STDOUT: 
	I0818 12:19:42.785421    3091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:19:42.785439    3091 client.go:171] duration metric: took 344.56875ms to LocalClient.Create
	I0818 12:19:44.787612    3091 start.go:128] duration metric: took 2.373354458s to createHost
	I0818 12:19:44.787694    3091 start.go:83] releasing machines lock for "multinode-571000", held for 2.373489s
	W0818 12:19:44.787798    3091 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:19:44.794983    3091 out.go:177] * Deleting "multinode-571000" in qemu2 ...
	W0818 12:19:44.826966    3091 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:19:44.826984    3091 start.go:729] Will try again in 5 seconds ...
	I0818 12:19:49.829221    3091 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:19:49.829649    3091 start.go:364] duration metric: took 331.708µs to acquireMachinesLock for "multinode-571000"
	I0818 12:19:49.829790    3091 start.go:93] Provisioning new machine with config: &{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:19:49.830091    3091 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:19:49.839764    3091 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:19:49.891405    3091 start.go:159] libmachine.API.Create for "multinode-571000" (driver="qemu2")
	I0818 12:19:49.891471    3091 client.go:168] LocalClient.Create starting
	I0818 12:19:49.891581    3091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:19:49.891653    3091 main.go:141] libmachine: Decoding PEM data...
	I0818 12:19:49.891669    3091 main.go:141] libmachine: Parsing certificate...
	I0818 12:19:49.891735    3091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:19:49.891780    3091 main.go:141] libmachine: Decoding PEM data...
	I0818 12:19:49.891793    3091 main.go:141] libmachine: Parsing certificate...
	I0818 12:19:49.892409    3091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:19:50.054559    3091 main.go:141] libmachine: Creating SSH key...
	I0818 12:19:50.159807    3091 main.go:141] libmachine: Creating Disk image...
	I0818 12:19:50.159812    3091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:19:50.159997    3091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:50.169170    3091 main.go:141] libmachine: STDOUT: 
	I0818 12:19:50.169190    3091 main.go:141] libmachine: STDERR: 
	I0818 12:19:50.169230    3091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2 +20000M
	I0818 12:19:50.177098    3091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:19:50.177121    3091 main.go:141] libmachine: STDERR: 
	I0818 12:19:50.177132    3091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:50.177139    3091 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:19:50.177146    3091 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:19:50.177177    3091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:15:ab:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:19:50.178748    3091 main.go:141] libmachine: STDOUT: 
	I0818 12:19:50.178768    3091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:19:50.178789    3091 client.go:171] duration metric: took 287.31625ms to LocalClient.Create
	I0818 12:19:52.180944    3091 start.go:128] duration metric: took 2.350843417s to createHost
	I0818 12:19:52.181012    3091 start.go:83] releasing machines lock for "multinode-571000", held for 2.351359667s
	W0818 12:19:52.181336    3091 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:19:52.196001    3091 out.go:201] 
	W0818 12:19:52.199961    3091 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:19:52.199986    3091 out.go:270] * 
	* 
	W0818 12:19:52.202623    3091 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:19:52.215900    3091 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-571000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (68.734584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.00s)
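Note on this failure group: every qemu2 start in this run reduces to the root cause visible above. qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which requires the socket_vmnet daemon to be listening on /var/run/socket_vmnet; with nothing on that socket, the connect is refused and host creation aborts. Below is a minimal Go sketch of a pre-flight probe for that socket. The path is taken from the log above; the probe itself is illustrative, not minikube code.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path taken from the log above
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// With the daemon down this reports the same condition QEMU hit:
    		// Failed to connect to "/var/run/socket_vmnet": Connection refused
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }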

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (100.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.102583ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-571000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- rollout status deployment/busybox: exit status 1 (57.285458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.798167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.750875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.039417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.268708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.408209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.773334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.351833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.879208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.957083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.59325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0818 12:21:08.654459    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.407125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.641167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.649041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.761208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.14025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.736916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (100.44s)
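Note: the dozen identical non-zero exits above are a bounded retry, not repeated test bugs: multinode_test.go:505 keeps re-running the same jsonpath query until pod IPs appear, then multinode_test.go:524 gives up. A sketch of that shape follows; it is illustrative, not the test's actual code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // retryPodIPs re-runs the jsonpath query until it succeeds or attempts run out.
    func retryPodIPs(profile string, attempts int) (string, error) {
    	var lastErr error
    	for i := 1; i <= attempts; i++ {
    		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile, "--",
    			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
    		if err == nil {
    			return string(out), nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %w: %s", i, err, out)
    		time.Sleep(2 * time.Second) // the real test also waits between attempts
    	}
    	return "", lastErr
    }

    func main() {
    	if _, err := retryPodIPs("multinode-571000", 12); err != nil {
    		fmt.Println("giving up:", err) // mirrors "failed to resolve pod IPs" above
    	}
    }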

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-571000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.11125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (30.195875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-571000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-571000 -v 3 --alsologtostderr: exit status 83 (40.497167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-571000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-571000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:32.858148    3177 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:32.858316    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:32.858319    3177 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:32.858322    3177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:32.858455    3177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:32.858695    3177 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:32.858878    3177 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:32.862779    3177 out.go:177] * The control-plane node multinode-571000 host is not running: state=Stopped
	I0818 12:21:32.865603    3177 out.go:177]   To start a cluster, run: "minikube start -p multinode-571000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-571000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.918792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
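Note: exit status 83 here is a refusal, not a crash: node add loads the profile, sees the control-plane host in state Stopped, and stops with advice instead of attempting to join a node. A hypothetical external guard showing the same check, using the status invocation from the post-mortem above (the helper name and structure are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostIsRunning queries host state the same way the post-mortem above does.
    func hostIsRunning(profile string) bool {
    	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
    	return strings.TrimSpace(string(out)) == "Running"
    }

    func main() {
    	if !hostIsRunning("multinode-571000") {
    		fmt.Println(`host stopped; run: minikube start -p multinode-571000`)
    		return
    	}
    	// Only now would "node add -p multinode-571000" have a running
    	// control plane to join.
    }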

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-571000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-571000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.742084ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-571000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-571000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-571000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (30.157458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
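Note: the second error at multinode_test.go:230 follows mechanically from the first: kubectl exited non-zero, so its stdout was empty, and decoding empty bytes with encoding/json yields exactly the message logged. Reproduced in isolation:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	var labels []map[string]string
    	err := json.Unmarshal([]byte(""), &labels) // kubectl failed, so stdout is empty
    	fmt.Println(err)                           // unexpected end of JSON input
    }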

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-571000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-571000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-571000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-571000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.896167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
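Note: the assertion at multinode_test.go:166 decodes the quoted payload and counts Config.Nodes; since the two-node start never succeeded, the profile still records only the single control-plane entry. A sketch of that check under an assumed minimal struct (the struct mirrors the JSON keys in the log, it is not minikube's own type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // profileList models just the keys the check needs.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Config struct {
    			Nodes []struct {
    				ControlPlane bool `json:"ControlPlane"`
    				Worker       bool `json:"Worker"`
    			} `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	// Trimmed from the payload quoted in the failure above.
    	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-571000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
    	var pl profileList
    	if err := json.Unmarshal(raw, &pl); err != nil {
    		panic(err)
    	}
    	fmt.Printf("expected 3 nodes, profile reports %d\n", len(pl.Valid[0].Config.Nodes)) // prints 1
    }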

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status --output json --alsologtostderr: exit status 7 (29.753084ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-571000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:33.066828    3189 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:33.066994    3189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.066998    3189 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:33.067000    3189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.067145    3189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:33.067254    3189 out.go:352] Setting JSON to true
	I0818 12:21:33.067267    3189 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:33.067332    3189 notify.go:220] Checking for updates...
	I0818 12:21:33.067469    3189 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:33.067476    3189 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:33.067720    3189 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:33.067723    3189 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:33.067726    3189 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-571000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.453542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
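Note: the decode failure at multinode_test.go:191 is a shape mismatch, not corrupt output: a single-node status --output json prints one JSON object, while the test unmarshals into a slice ([]cmd.Status). Reproduced in isolation (the Status type here is an assumed stand-in for cmd.Status):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status carries the fields shown in the stdout above.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    	Worker                                     bool
    }

    func main() {
    	obj := []byte(`{"Name":"multinode-571000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

    	var many []Status
    	fmt.Println(json.Unmarshal(obj, &many)) // json: cannot unmarshal object into Go value of type []main.Status

    	var one Status
    	if err := json.Unmarshal(obj, &one); err == nil {
    		fmt.Println(one.Host) // Stopped
    	}
    }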

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 node stop m03: exit status 85 (45.425334ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-571000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status: exit status 7 (29.517917ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr: exit status 7 (30.124833ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:33.202228    3197 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:33.202358    3197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.202361    3197 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:33.202364    3197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.202480    3197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:33.202605    3197 out.go:352] Setting JSON to false
	I0818 12:21:33.202616    3197 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:33.202668    3197 notify.go:220] Checking for updates...
	I0818 12:21:33.202834    3197 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:33.202841    3197 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:33.203052    3197 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:33.203055    3197 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:33.203057    3197 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr": multinode-571000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.131042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.079125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:33.261059    3201 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:33.261314    3201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.261318    3201 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:33.261320    3201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.261451    3201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:33.261682    3201 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:33.261876    3201 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:33.266657    3201 out.go:201] 
	W0818 12:21:33.269573    3201 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0818 12:21:33.269578    3201 out.go:270] * 
	* 
	W0818 12:21:33.271238    3201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:21:33.274602    3201 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0818 12:21:33.261059    3201 out.go:345] Setting OutFile to fd 1 ...
I0818 12:21:33.261314    3201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 12:21:33.261318    3201 out.go:358] Setting ErrFile to fd 2...
I0818 12:21:33.261320    3201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 12:21:33.261451    3201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 12:21:33.261682    3201 mustload.go:65] Loading cluster: multinode-571000
I0818 12:21:33.261876    3201 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 12:21:33.266657    3201 out.go:201] 
W0818 12:21:33.269573    3201 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0818 12:21:33.269578    3201 out.go:270] * 
* 
W0818 12:21:33.271238    3201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0818 12:21:33.274602    3201 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-571000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (30.026417ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:33.307980    3203 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:33.308128    3203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.308132    3203 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:33.308134    3203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:33.308306    3203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:33.308423    3203 out.go:352] Setting JSON to false
	I0818 12:21:33.308434    3203 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:33.308481    3203 notify.go:220] Checking for updates...
	I0818 12:21:33.308623    3203 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:33.308635    3203 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:33.308839    3203 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:33.308842    3203 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:33.308845    3203 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (75.085541ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:34.315216    3205 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:34.315437    3205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:34.315441    3205 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:34.315444    3205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:34.315615    3205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:34.315765    3205 out.go:352] Setting JSON to false
	I0818 12:21:34.315779    3205 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:34.315824    3205 notify.go:220] Checking for updates...
	I0818 12:21:34.316033    3205 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:34.316042    3205 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:34.316321    3205 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:34.316326    3205 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:34.316329    3205 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (73.370333ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:36.290988    3207 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:36.291202    3207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:36.291206    3207 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:36.291209    3207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:36.291389    3207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:36.291556    3207 out.go:352] Setting JSON to false
	I0818 12:21:36.291569    3207 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:36.291604    3207 notify.go:220] Checking for updates...
	I0818 12:21:36.291856    3207 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:36.291865    3207 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:36.292164    3207 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:36.292169    3207 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:36.292172    3207 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (72.699625ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:38.371641    3209 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:38.371883    3209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:38.371889    3209 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:38.371893    3209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:38.372074    3209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:38.372243    3209 out.go:352] Setting JSON to false
	I0818 12:21:38.372262    3209 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:38.372285    3209 notify.go:220] Checking for updates...
	I0818 12:21:38.372538    3209 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:38.372549    3209 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:38.372817    3209 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:38.372822    3209 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:38.372826    3209 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (74.7865ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:40.701472    3211 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:40.701673    3211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:40.701678    3211 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:40.701681    3211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:40.701859    3211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:40.702012    3211 out.go:352] Setting JSON to false
	I0818 12:21:40.702026    3211 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:40.702060    3211 notify.go:220] Checking for updates...
	I0818 12:21:40.702275    3211 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:40.702284    3211 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:40.702558    3211 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:40.702563    3211 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:40.702566    3211 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (71.80975ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:44.389007    3213 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:44.389213    3213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:44.389221    3213 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:44.389225    3213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:44.389421    3213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:44.389599    3213 out.go:352] Setting JSON to false
	I0818 12:21:44.389615    3213 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:44.389664    3213 notify.go:220] Checking for updates...
	I0818 12:21:44.389916    3213 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:44.389930    3213 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:44.390254    3213 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:44.390259    3213 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:44.390262    3213 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (74.9495ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:21:53.312313    3215 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:21:53.312828    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:53.312835    3215 out.go:358] Setting ErrFile to fd 2...
	I0818 12:21:53.312838    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:21:53.313118    3215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:21:53.313505    3215 out.go:352] Setting JSON to false
	I0818 12:21:53.313542    3215 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:21:53.313569    3215 notify.go:220] Checking for updates...
	I0818 12:21:53.314121    3215 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:21:53.314138    3215 status.go:255] checking status of multinode-571000 ...
	I0818 12:21:53.314407    3215 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:21:53.314413    3215 status.go:343] host is not running, skipping remaining checks
	I0818 12:21:53.314416    3215 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (72.801ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:06.431194    3217 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:06.431416    3217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:06.431423    3217 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:06.431426    3217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:06.431644    3217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:06.431845    3217 out.go:352] Setting JSON to false
	I0818 12:22:06.431862    3217 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:22:06.431900    3217 notify.go:220] Checking for updates...
	I0818 12:22:06.432147    3217 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:06.432156    3217 status.go:255] checking status of multinode-571000 ...
	I0818 12:22:06.432455    3217 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:22:06.432460    3217 status.go:343] host is not running, skipping remaining checks
	I0818 12:22:06.432463    3217 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr: exit status 7 (72.405458ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:24.863127    3219 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:24.863340    3219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:24.863344    3219 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:24.863348    3219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:24.863503    3219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:24.863663    3219 out.go:352] Setting JSON to false
	I0818 12:22:24.863677    3219 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:22:24.863721    3219 notify.go:220] Checking for updates...
	I0818 12:22:24.863959    3219 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:24.863968    3219 status.go:255] checking status of multinode-571000 ...
	I0818 12:22:24.864242    3219 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:22:24.864247    3219 status.go:343] host is not running, skipping remaining checks
	I0818 12:22:24.864250    3219 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (33.861458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.67s)
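All four status polls above fail identically: the qemu2 VM never comes back after the stop, so each run reports host/kubelet/apiserver as Stopped and exits with status 7, which helpers_test.go itself flags as "may be ok" for a stopped host. The same check can be reproduced by hand (profile name taken from this run; the exit-code reading is an assumption based on the harness's own note):

	out/minikube-darwin-arm64 -p multinode-571000 status -v=7 --alsologtostderr
	echo "exit: $?"   # 7 here tracks the Stopped host, not a harness bug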

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-571000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-571000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-571000: (3.565564041s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-571000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-571000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220132417s)

                                                
                                                
-- stdout --
	* [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	* Restarting existing qemu2 VM for "multinode-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:28.558939    3245 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:28.559107    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:28.559112    3245 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:28.559115    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:28.559284    3245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:28.560587    3245 out.go:352] Setting JSON to false
	I0818 12:22:28.580540    3245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3118,"bootTime":1724005830,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:22:28.580616    3245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:22:28.585617    3245 out.go:177] * [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:22:28.592435    3245 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:22:28.592505    3245 notify.go:220] Checking for updates...
	I0818 12:22:28.599513    3245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:22:28.600785    3245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:22:28.603554    3245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:22:28.606533    3245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:22:28.609569    3245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:22:28.612811    3245 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:28.612867    3245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:22:28.617527    3245 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:22:28.624528    3245 start.go:297] selected driver: qemu2
	I0818 12:22:28.624538    3245 start.go:901] validating driver "qemu2" against &{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:22:28.624631    3245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:22:28.627015    3245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:22:28.627041    3245 cni.go:84] Creating CNI manager for ""
	I0818 12:22:28.627047    3245 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 12:22:28.627116    3245 start.go:340] cluster config:
	{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:22:28.630859    3245 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:22:28.637488    3245 out.go:177] * Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	I0818 12:22:28.641562    3245 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:22:28.641583    3245 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:22:28.641591    3245 cache.go:56] Caching tarball of preloaded images
	I0818 12:22:28.641662    3245 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:22:28.641668    3245 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:22:28.641731    3245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/multinode-571000/config.json ...
	I0818 12:22:28.642149    3245 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:22:28.642186    3245 start.go:364] duration metric: took 30.709µs to acquireMachinesLock for "multinode-571000"
	I0818 12:22:28.642196    3245 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:22:28.642203    3245 fix.go:54] fixHost starting: 
	I0818 12:22:28.642334    3245 fix.go:112] recreateIfNeeded on multinode-571000: state=Stopped err=<nil>
	W0818 12:22:28.642342    3245 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:22:28.649483    3245 out.go:177] * Restarting existing qemu2 VM for "multinode-571000" ...
	I0818 12:22:28.653549    3245 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:22:28.653596    3245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:15:ab:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:22:28.655636    3245 main.go:141] libmachine: STDOUT: 
	I0818 12:22:28.655669    3245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:22:28.655702    3245 fix.go:56] duration metric: took 13.500625ms for fixHost
	I0818 12:22:28.655708    3245 start.go:83] releasing machines lock for "multinode-571000", held for 13.517208ms
	W0818 12:22:28.655717    3245 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:22:28.655751    3245 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:22:28.655755    3245 start.go:729] Will try again in 5 seconds ...
	I0818 12:22:33.657838    3245 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:22:33.658168    3245 start.go:364] duration metric: took 272.25µs to acquireMachinesLock for "multinode-571000"
	I0818 12:22:33.658296    3245 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:22:33.658315    3245 fix.go:54] fixHost starting: 
	I0818 12:22:33.658972    3245 fix.go:112] recreateIfNeeded on multinode-571000: state=Stopped err=<nil>
	W0818 12:22:33.658999    3245 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:22:33.663406    3245 out.go:177] * Restarting existing qemu2 VM for "multinode-571000" ...
	I0818 12:22:33.671352    3245 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:22:33.671565    3245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:15:ab:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:22:33.680365    3245 main.go:141] libmachine: STDOUT: 
	I0818 12:22:33.680433    3245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:22:33.680500    3245 fix.go:56] duration metric: took 22.183542ms for fixHost
	I0818 12:22:33.680520    3245 start.go:83] releasing machines lock for "multinode-571000", held for 22.328708ms
	W0818 12:22:33.680672    3245 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:22:33.688447    3245 out.go:201] 
	W0818 12:22:33.692291    3245 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:22:33.692314    3245 out.go:270] * 
	* 
	W0818 12:22:33.694940    3245 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:22:33.703291    3245 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-571000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-571000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (31.927875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.92s)
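Both restart attempts die at the same point: libmachine wraps qemu-system-aarch64 in socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet. A quick host-side sanity check (paths copied from the exec line above; how the daemon is supervised on this agent is an assumption):

	ls -l /var/run/socket_vmnet      # the daemon's listening socket should exist
	pgrep -fl socket_vmnet           # is the daemon process alive at all?
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # reproduces the same "Connection refused" when the daemon is down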

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 node delete m03: exit status 83 (39.39025ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-571000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-571000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-571000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr: exit status 7 (29.439ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:33.885450    3259 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:33.885586    3259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:33.885589    3259 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:33.885592    3259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:33.885724    3259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:33.885839    3259 out.go:352] Setting JSON to false
	I0818 12:22:33.885850    3259 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:22:33.885902    3259 notify.go:220] Checking for updates...
	I0818 12:22:33.886056    3259 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:33.886063    3259 status.go:255] checking status of multinode-571000 ...
	I0818 12:22:33.886259    3259 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:22:33.886263    3259 status.go:343] host is not running, skipping remaining checks
	I0818 12:22:33.886265    3259 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.754125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
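"node delete m03" never reaches the worker node: minikube bails out with exit status 83 as soon as it sees the control-plane host in state=Stopped. The recovery paths are the ones the output itself suggests (commands copied from the hints above):

	out/minikube-darwin-arm64 start -p multinode-571000    # the exit-83 hint
	out/minikube-darwin-arm64 delete -p multinode-571000   # the GUEST_PROVISION hint, if the socket_vmnet failure persists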

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-571000 stop: (2.997959958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status: exit status 7 (60.462958ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr: exit status 7 (32.325625ms)

                                                
                                                
-- stdout --
	multinode-571000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:37.006542    3283 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:37.006713    3283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:37.006716    3283 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:37.006719    3283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:37.006852    3283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:37.006980    3283 out.go:352] Setting JSON to false
	I0818 12:22:37.006993    3283 mustload.go:65] Loading cluster: multinode-571000
	I0818 12:22:37.007040    3283 notify.go:220] Checking for updates...
	I0818 12:22:37.007185    3283 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:37.007192    3283 status.go:255] checking status of multinode-571000 ...
	I0818 12:22:37.007405    3283 status.go:330] multinode-571000 host status = "Stopped" (err=<nil>)
	I0818 12:22:37.007409    3283 status.go:343] host is not running, skipping remaining checks
	I0818 12:22:37.007411    3283 status.go:257] multinode-571000 status: &{Name:multinode-571000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr": multinode-571000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr": multinode-571000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.835833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.12s)
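The stop itself succeeds in about 3s; the assertions at multinode_test.go:364/368 then fail because status lists a single node where a two-node cluster is expected (the log earlier notes "multinode detected (1 nodes found)", i.e. the second node was never created). The discrepancy shows up as a plain line count (a rough stand-in for the harness's check, whose exact implementation is an assumption):

	out/minikube-darwin-arm64 -p multinode-571000 status --alsologtostderr | grep -c 'host: Stopped'
	# prints 1; the test expects one "host: Stopped" entry per node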

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-571000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-571000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177169959s)

                                                
                                                
-- stdout --
	* [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	* Restarting existing qemu2 VM for "multinode-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:22:37.065526    3287 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:22:37.065661    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:37.065665    3287 out.go:358] Setting ErrFile to fd 2...
	I0818 12:22:37.065667    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:22:37.065786    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:22:37.066791    3287 out.go:352] Setting JSON to false
	I0818 12:22:37.082987    3287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3127,"bootTime":1724005830,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:22:37.083058    3287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:22:37.087745    3287 out.go:177] * [multinode-571000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:22:37.094679    3287 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:22:37.094714    3287 notify.go:220] Checking for updates...
	I0818 12:22:37.101643    3287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:22:37.104658    3287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:22:37.107687    3287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:22:37.110567    3287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:22:37.113611    3287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:22:37.116900    3287 config.go:182] Loaded profile config "multinode-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:22:37.117181    3287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:22:37.120625    3287 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:22:37.127629    3287 start.go:297] selected driver: qemu2
	I0818 12:22:37.127637    3287 start.go:901] validating driver "qemu2" against &{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:22:37.127695    3287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:22:37.130010    3287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:22:37.130035    3287 cni.go:84] Creating CNI manager for ""
	I0818 12:22:37.130040    3287 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 12:22:37.130092    3287 start.go:340] cluster config:
	{Name:multinode-571000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-571000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:22:37.133494    3287 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:22:37.140637    3287 out.go:177] * Starting "multinode-571000" primary control-plane node in "multinode-571000" cluster
	I0818 12:22:37.144693    3287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:22:37.144711    3287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:22:37.144718    3287 cache.go:56] Caching tarball of preloaded images
	I0818 12:22:37.144777    3287 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:22:37.144782    3287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:22:37.144840    3287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/multinode-571000/config.json ...
	I0818 12:22:37.145253    3287 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:22:37.145278    3287 start.go:364] duration metric: took 19.958µs to acquireMachinesLock for "multinode-571000"
	I0818 12:22:37.145287    3287 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:22:37.145293    3287 fix.go:54] fixHost starting: 
	I0818 12:22:37.145401    3287 fix.go:112] recreateIfNeeded on multinode-571000: state=Stopped err=<nil>
	W0818 12:22:37.145409    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:22:37.153636    3287 out.go:177] * Restarting existing qemu2 VM for "multinode-571000" ...
	I0818 12:22:37.157607    3287 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:22:37.157636    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:15:ab:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:22:37.159604    3287 main.go:141] libmachine: STDOUT: 
	I0818 12:22:37.159624    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:22:37.159647    3287 fix.go:56] duration metric: took 14.354958ms for fixHost
	I0818 12:22:37.159651    3287 start.go:83] releasing machines lock for "multinode-571000", held for 14.369167ms
	W0818 12:22:37.159660    3287 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:22:37.159685    3287 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:22:37.159689    3287 start.go:729] Will try again in 5 seconds ...
	I0818 12:22:42.161823    3287 start.go:360] acquireMachinesLock for multinode-571000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:22:42.162181    3287 start.go:364] duration metric: took 278.667µs to acquireMachinesLock for "multinode-571000"
	I0818 12:22:42.162285    3287 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:22:42.162303    3287 fix.go:54] fixHost starting: 
	I0818 12:22:42.162984    3287 fix.go:112] recreateIfNeeded on multinode-571000: state=Stopped err=<nil>
	W0818 12:22:42.163009    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:22:42.167475    3287 out.go:177] * Restarting existing qemu2 VM for "multinode-571000" ...
	I0818 12:22:42.171455    3287 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:22:42.171803    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c1:15:ab:93:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/multinode-571000/disk.qcow2
	I0818 12:22:42.181241    3287 main.go:141] libmachine: STDOUT: 
	I0818 12:22:42.181297    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:22:42.181398    3287 fix.go:56] duration metric: took 19.096208ms for fixHost
	I0818 12:22:42.181416    3287 start.go:83] releasing machines lock for "multinode-571000", held for 19.212583ms
	W0818 12:22:42.181601    3287 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:22:42.189424    3287 out.go:201] 
	W0818 12:22:42.192437    3287 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:22:42.192459    3287 out.go:270] * 
	* 
	W0818 12:22:42.194321    3287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:22:42.202448    3287 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-571000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (67.233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-571000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-571000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-571000-m01 --driver=qemu2 : exit status 80 (10.005921792s)

                                                
                                                
-- stdout --
	* [multinode-571000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-571000-m01" primary control-plane node in "multinode-571000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-571000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-571000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-571000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-571000-m02 --driver=qemu2 : exit status 80 (10.008741709s)

                                                
                                                
-- stdout --
	* [multinode-571000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-571000-m02" primary control-plane node in "multinode-571000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-571000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-571000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-571000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-571000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-571000: exit status 83 (87.254709ms)

-- stdout --
	* The control-plane node multinode-571000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-571000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-571000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-571000 -n multinode-571000: exit status 7 (29.875125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.25s)
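
Every VM creation in this run dies at the same point: the qemu2 driver hands the guest's network device to socket_vmnet, and the daemon behind /var/run/socket_vmnet is refusing connections, so the guest never boots. A minimal triage sketch for the CI host, assuming the install layout shown in the logs above; the restart invocation and gateway address follow the socket_vmnet README defaults and are not recorded in this report:

	# Is the socket present, and is any daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, restart per the socket_vmnet README (needs root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &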

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0818 12:23:06.676885    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.951874333s)

-- stdout --
	* [test-preload-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-060000" primary control-plane node in "test-preload-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:23:02.669168    3342 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:02.669309    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:02.669315    3342 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:02.669318    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:02.669476    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:23:02.670524    3342 out.go:352] Setting JSON to false
	I0818 12:23:02.686621    3342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3152,"bootTime":1724005830,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:23:02.686681    3342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:23:02.692980    3342 out.go:177] * [test-preload-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:23:02.701006    3342 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:23:02.701112    3342 notify.go:220] Checking for updates...
	I0818 12:23:02.708944    3342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:23:02.711971    3342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:23:02.714953    3342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:23:02.717988    3342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:23:02.720986    3342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:23:02.724251    3342 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:02.724306    3342 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:23:02.728953    3342 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:23:02.735904    3342 start.go:297] selected driver: qemu2
	I0818 12:23:02.735911    3342 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:23:02.735918    3342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:23:02.738229    3342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:23:02.740939    3342 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:23:02.744085    3342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:23:02.744126    3342 cni.go:84] Creating CNI manager for ""
	I0818 12:23:02.744135    3342 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:23:02.744140    3342 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:23:02.744176    3342 start.go:340] cluster config:
	{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:23:02.747640    3342 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.754935    3342 out.go:177] * Starting "test-preload-060000" primary control-plane node in "test-preload-060000" cluster
	I0818 12:23:02.758967    3342 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0818 12:23:02.759061    3342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/test-preload-060000/config.json ...
	I0818 12:23:02.759088    3342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/test-preload-060000/config.json: {Name:mk675f576c4c8327e547140198a815502b6c9830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:23:02.759110    3342 cache.go:107] acquiring lock: {Name:mkcaf27b6b9250fba1720aabd7d5e4375ecdab25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759099    3342 cache.go:107] acquiring lock: {Name:mk2420399c090fc2562657fb31407f90325fe729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759131    3342 cache.go:107] acquiring lock: {Name:mk5b470fd2ba5f36d06194c25a001bea69a8bfc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759102    3342 cache.go:107] acquiring lock: {Name:mkf49e1193e8fcc4f4e86689416fe33b588cd2f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759346    3342 cache.go:107] acquiring lock: {Name:mk1900fb6dcaade0a9029275a4c7eb2993131805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759357    3342 cache.go:107] acquiring lock: {Name:mkede0521086e97f4e4e049dcaf7b5179bd9f687 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759369    3342 cache.go:107] acquiring lock: {Name:mk73c00e6ff0683102e612063e46dbdf03f17b9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759459    3342 start.go:360] acquireMachinesLock for test-preload-060000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:02.759493    3342 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0818 12:23:02.759522    3342 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0818 12:23:02.759550    3342 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0818 12:23:02.759559    3342 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:23:02.759565    3342 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:23:02.759568    3342 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0818 12:23:02.759584    3342 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0818 12:23:02.759548    3342 start.go:364] duration metric: took 77.208µs to acquireMachinesLock for "test-preload-060000"
	I0818 12:23:02.759355    3342 cache.go:107] acquiring lock: {Name:mkdcac00a04f217d026792e0f5c21e39587d6135 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:23:02.759646    3342 start.go:93] Provisioning new machine with config: &{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:02.759737    3342 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:02.759811    3342 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:23:02.766830    3342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:23:02.771530    3342 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0818 12:23:02.771757    3342 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0818 12:23:02.771835    3342 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:23:02.771936    3342 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0818 12:23:02.774179    3342 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0818 12:23:02.774205    3342 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:23:02.774220    3342 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0818 12:23:02.774254    3342 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:23:02.784539    3342 start.go:159] libmachine.API.Create for "test-preload-060000" (driver="qemu2")
	I0818 12:23:02.784565    3342 client.go:168] LocalClient.Create starting
	I0818 12:23:02.784624    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:02.784654    3342 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:02.784663    3342 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:02.784698    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:02.784724    3342 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:02.784739    3342 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:02.785089    3342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:02.940722    3342 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:03.000992    3342 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:03.001024    3342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:03.001242    3342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:03.011274    3342 main.go:141] libmachine: STDOUT: 
	I0818 12:23:03.011306    3342 main.go:141] libmachine: STDERR: 
	I0818 12:23:03.011361    3342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2 +20000M
	I0818 12:23:03.020529    3342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:03.020549    3342 main.go:141] libmachine: STDERR: 
	I0818 12:23:03.020563    3342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:03.020567    3342 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:03.020587    3342 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:03.020619    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:b2:e5:bd:17:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:03.022743    3342 main.go:141] libmachine: STDOUT: 
	I0818 12:23:03.022758    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:03.022780    3342 client.go:171] duration metric: took 238.213958ms to LocalClient.Create
	I0818 12:23:03.216454    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0818 12:23:03.237813    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0818 12:23:03.254164    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0818 12:23:03.273732    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0818 12:23:03.297457    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0818 12:23:03.304858    3342 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0818 12:23:03.304892    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0818 12:23:03.326014    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0818 12:23:03.410547    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0818 12:23:03.410593    3342 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 651.462083ms
	I0818 12:23:03.410624    3342 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0818 12:23:03.863799    3342 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0818 12:23:03.863920    3342 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 12:23:04.180095    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0818 12:23:04.180141    3342 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.421039625s
	I0818 12:23:04.180165    3342 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0818 12:23:05.023139    3342 start.go:128] duration metric: took 2.263395125s to createHost
	I0818 12:23:05.023192    3342 start.go:83] releasing machines lock for "test-preload-060000", held for 2.263588875s
	W0818 12:23:05.023257    3342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:05.033293    3342 out.go:177] * Deleting "test-preload-060000" in qemu2 ...
	W0818 12:23:05.061467    3342 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:05.061491    3342 start.go:729] Will try again in 5 seconds ...
	I0818 12:23:06.029441    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0818 12:23:06.029501    3342 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.2701945s
	I0818 12:23:06.029537    3342 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0818 12:23:06.282981    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0818 12:23:06.283031    3342 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.523718042s
	I0818 12:23:06.283062    3342 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0818 12:23:07.777272    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0818 12:23:07.777315    3342 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.018261333s
	I0818 12:23:07.777342    3342 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0818 12:23:08.402756    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0818 12:23:08.402800    3342 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.643747291s
	I0818 12:23:08.402829    3342 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0818 12:23:09.026709    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0818 12:23:09.026751    3342 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.267430791s
	I0818 12:23:09.026776    3342 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0818 12:23:10.062423    3342 start.go:360] acquireMachinesLock for test-preload-060000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:23:10.062849    3342 start.go:364] duration metric: took 361.5µs to acquireMachinesLock for "test-preload-060000"
	I0818 12:23:10.062986    3342 start.go:93] Provisioning new machine with config: &{Name:test-preload-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:23:10.063231    3342 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:23:10.071934    3342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:23:10.123626    3342 start.go:159] libmachine.API.Create for "test-preload-060000" (driver="qemu2")
	I0818 12:23:10.123680    3342 client.go:168] LocalClient.Create starting
	I0818 12:23:10.123805    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:23:10.123873    3342 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:10.123889    3342 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:10.123947    3342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:23:10.123995    3342 main.go:141] libmachine: Decoding PEM data...
	I0818 12:23:10.124009    3342 main.go:141] libmachine: Parsing certificate...
	I0818 12:23:10.124496    3342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:23:10.288351    3342 main.go:141] libmachine: Creating SSH key...
	I0818 12:23:10.524158    3342 main.go:141] libmachine: Creating Disk image...
	I0818 12:23:10.524169    3342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:23:10.524370    3342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:10.534161    3342 main.go:141] libmachine: STDOUT: 
	I0818 12:23:10.534198    3342 main.go:141] libmachine: STDERR: 
	I0818 12:23:10.534245    3342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2 +20000M
	I0818 12:23:10.542330    3342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:23:10.542353    3342 main.go:141] libmachine: STDERR: 
	I0818 12:23:10.542365    3342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:10.542379    3342 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:23:10.542390    3342 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:23:10.542431    3342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:57:92:2e:b7:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/test-preload-060000/disk.qcow2
	I0818 12:23:10.544109    3342 main.go:141] libmachine: STDOUT: 
	I0818 12:23:10.544136    3342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:23:10.544150    3342 client.go:171] duration metric: took 420.468125ms to LocalClient.Create
	I0818 12:23:11.610147    3342 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0818 12:23:11.610226    3342 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.851020125s
	I0818 12:23:11.610258    3342 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0818 12:23:11.610304    3342 cache.go:87] Successfully saved all images to host disk.
	I0818 12:23:12.546355    3342 start.go:128] duration metric: took 2.483091125s to createHost
	I0818 12:23:12.546442    3342 start.go:83] releasing machines lock for "test-preload-060000", held for 2.483588583s
	W0818 12:23:12.546711    3342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:23:12.564204    3342 out.go:201] 
	W0818 12:23:12.568277    3342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:23:12.568305    3342 out.go:270] * 
	* 
	W0818 12:23:12.570890    3342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:23:12.579376    3342 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-060000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-18 12:23:12.596079 -0700 PDT m=+2756.355674334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-060000 -n test-preload-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-060000 -n test-preload-060000: exit status 7 (67.995125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-060000
--- FAIL: TestPreload (10.10s)
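
The verbose TestPreload log narrows the failure down: all eight images are cached to disk ("Successfully saved all images to host disk."), and both qemu-img convert and qemu-img resize exit cleanly, so only the socket_vmnet_client hop fails. Since socket_vmnet_client connects to the socket and then execs the given command with the connection passed as a file descriptor (fd=3 in the -netdev flag above), that hop can be probed without minikube; a sketch, with `true` standing in for the full qemu-system-aarch64 invocation:

	# Exits non-zero with the same "Failed to connect" message while the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket_vmnet reachable" \
	  || echo "connection still refused"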

TestScheduledStopUnix (10.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-084000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-084000 --memory=2048 --driver=qemu2 : exit status 80 (10.011337208s)

-- stdout --
	* [scheduled-stop-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-084000" primary control-plane node in "scheduled-stop-084000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-084000" primary control-plane node in "scheduled-stop-084000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-18 12:23:22.752116 -0700 PDT m=+2766.511800042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-084000 -n scheduled-stop-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-084000 -n scheduled-stop-084000: exit status 7 (68.829417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-084000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-084000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-084000
--- FAIL: TestScheduledStopUnix (10.16s)
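
Because provisioning fails, the scheduled-stop behavior this test exists to exercise is never reached. For reference, the feature under test is minikube's schedulable stop; a sketch of the happy path had the profile started, with flag names taken from minikube stop's documented options rather than from this report:

	# Schedule a stop five minutes out, then cancel it
	out/minikube-darwin-arm64 stop -p scheduled-stop-084000 --schedule 5m
	out/minikube-darwin-arm64 stop -p scheduled-stop-084000 --cancel-scheduled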

TestSkaffold (12.6s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4206347745 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4206347745 version: (1.055377084s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-162000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-162000 --memory=2600 --driver=qemu2 : exit status 80 (10.114657042s)

-- stdout --
	* [skaffold-162000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-162000" primary control-plane node in "skaffold-162000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-162000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-162000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-162000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-162000" primary control-plane node in "skaffold-162000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-162000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-162000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-18 12:23:35.36181 -0700 PDT m=+2779.121605209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-162000 -n skaffold-162000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-162000 -n skaffold-162000: exit status 7 (60.613792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-162000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-162000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-162000
--- FAIL: TestSkaffold (12.60s)
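
Each post-mortem above reads a single field through minikube's Go-template --format flag, which is why the helper prints only "Stopped": {{.Host}} selects the host state from the status struct. The same template can pull the related fields in one call when triaging by hand; the field names here are the ones minikube's status command documents:

	out/minikube-darwin-arm64 status -p skaffold-162000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'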

TestRunningBinaryUpgrade (594.26s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.577845619 start -p running-upgrade-363000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.577845619 start -p running-upgrade-363000 --memory=2200 --vm-driver=qemu2 : (52.215176167s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0818 12:26:08.651107    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:26:09.766085    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m28.680277958s)

-- stdout --
	* [running-upgrade-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-363000" primary control-plane node in "running-upgrade-363000" cluster
	* Updating the running qemu2 "running-upgrade-363000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0818 12:25:10.064581    3721 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:25:10.064697    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:25:10.064701    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:25:10.064704    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:25:10.064840    3721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:25:10.066054    3721 out.go:352] Setting JSON to false
	I0818 12:25:10.082802    3721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3280,"bootTime":1724005830,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:25:10.082880    3721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:25:10.087280    3721 out.go:177] * [running-upgrade-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:25:10.094226    3721 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:25:10.094296    3721 notify.go:220] Checking for updates...
	I0818 12:25:10.102052    3721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:25:10.106250    3721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:25:10.109278    3721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:25:10.110603    3721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:25:10.113253    3721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:25:10.116508    3721 config.go:182] Loaded profile config "running-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:25:10.120278    3721 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 12:25:10.123215    3721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:25:10.127202    3721 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:25:10.134257    3721 start.go:297] selected driver: qemu2
	I0818 12:25:10.134266    3721 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50258 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:25:10.134344    3721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:25:10.136656    3721 cni.go:84] Creating CNI manager for ""
	I0818 12:25:10.136676    3721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
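	The bridge recommendation at cni.go:158 follows from Kubernetes 1.24 removing dockershim: with the docker runtime, kubelet now goes through cri-dockerd and needs an explicitly configured CNI. A hypothetical sketch of that version gate (the real check lives in minikube's cni package; this is an illustration, not its code):

    // Hypothetical version gate mirroring cni.go:158: on Kubernetes
    // v1.24+ with the docker runtime, recommend an explicit bridge CNI.
    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    func recommendCNI(k8sVersion, runtime string) string {
    	if runtime == "docker" && semver.Compare(k8sVersion, "v1.24.0") >= 0 {
    		return "bridge" // dockershim networking is gone from kubelet
    	}
    	return ""
    }

    func main() {
    	fmt.Println(recommendCNI("v1.24.1", "docker")) // bridge
    }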
	I0818 12:25:10.136709    3721 start.go:340] cluster config:
	{Name:running-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50258 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:25:10.136758    3721 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:25:10.143139    3721 out.go:177] * Starting "running-upgrade-363000" primary control-plane node in "running-upgrade-363000" cluster
	I0818 12:25:10.147268    3721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:25:10.147284    3721 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0818 12:25:10.147293    3721 cache.go:56] Caching tarball of preloaded images
	I0818 12:25:10.147363    3721 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:25:10.147368    3721 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0818 12:25:10.147428    3721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/config.json ...
	I0818 12:25:10.147878    3721 start.go:360] acquireMachinesLock for running-upgrade-363000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:25:10.147910    3721 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "running-upgrade-363000"
	I0818 12:25:10.147918    3721 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:25:10.147922    3721 fix.go:54] fixHost starting: 
	I0818 12:25:10.148518    3721 fix.go:112] recreateIfNeeded on running-upgrade-363000: state=Running err=<nil>
	W0818 12:25:10.148528    3721 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:25:10.157232    3721 out.go:177] * Updating the running qemu2 "running-upgrade-363000" VM ...
	I0818 12:25:10.161352    3721 machine.go:93] provisionDockerMachine start ...
	I0818 12:25:10.161405    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.161522    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.161527    3721 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:25:10.225312    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-363000
	
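	The "native" SSH client above is minikube dialing the host port (localhost:50226) that QEMU forwards to the VM's sshd. A standalone equivalent using golang.org/x/crypto/ssh, with the port, user, and key path taken from the log (skipping host-key verification is tolerable only for throwaway test VMs like this one):

    // Standalone equivalent of the "native SSH client" round-trip above.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "localhost:50226", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, _ := sess.CombinedOutput("hostname")
    	fmt.Printf("%s", out) // running-upgrade-363000
    }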
	I0818 12:25:10.225326    3721 buildroot.go:166] provisioning hostname "running-upgrade-363000"
	I0818 12:25:10.225369    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.225484    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.225492    3721 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-363000 && echo "running-upgrade-363000" | sudo tee /etc/hostname
	I0818 12:25:10.292817    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-363000
	
	I0818 12:25:10.292873    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.293000    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.293008    3721 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-363000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-363000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-363000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:25:10.357024    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:25:10.357036    3721 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-984/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-984/.minikube}
	I0818 12:25:10.357048    3721 buildroot.go:174] setting up certificates
	I0818 12:25:10.357053    3721 provision.go:84] configureAuth start
	I0818 12:25:10.357059    3721 provision.go:143] copyHostCerts
	I0818 12:25:10.357132    3721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem, removing ...
	I0818 12:25:10.357138    3721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem
	I0818 12:25:10.357269    3721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem (1078 bytes)
	I0818 12:25:10.357442    3721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem, removing ...
	I0818 12:25:10.357445    3721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem
	I0818 12:25:10.357496    3721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem (1123 bytes)
	I0818 12:25:10.357598    3721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem, removing ...
	I0818 12:25:10.357601    3721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem
	I0818 12:25:10.357651    3721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem (1679 bytes)
	I0818 12:25:10.357740    3721 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-363000 san=[127.0.0.1 localhost minikube running-upgrade-363000]
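	configureAuth mints a Docker server certificate whose SANs must cover every name the TLS endpoint will be reached by, which is why the log lists 127.0.0.1, localhost, minikube, and the machine name. An illustrative crypto/x509 sketch (not minikube's code; self-signed here for brevity, whereas the real flow signs with ca.pem/ca-key.pem):

    // Illustrative server-cert template with the SAN list from the log.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-363000"}},
    		DNSNames:     []string{"localhost", "minikube", "running-upgrade-363000"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch; the real flow uses the minikube CA as parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println(len(der), err)
    }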
	I0818 12:25:10.460282    3721 provision.go:177] copyRemoteCerts
	I0818 12:25:10.460326    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:25:10.460335    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:25:10.492928    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0818 12:25:10.500379    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0818 12:25:10.507037    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:25:10.513983    3721 provision.go:87] duration metric: took 156.921542ms to configureAuth
	I0818 12:25:10.513992    3721 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:25:10.514104    3721 config.go:182] Loaded profile config "running-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:25:10.514141    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.514232    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.514236    3721 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:25:10.574416    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:25:10.574425    3721 buildroot.go:70] root file system type: tmpfs
	I0818 12:25:10.574474    3721 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:25:10.574525    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.574653    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.574685    3721 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:25:10.638262    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:25:10.638317    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.638430    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.638439    3721 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:25:10.700509    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
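	The "diff -u ... || { mv ...; restart docker; }" command above makes the unit install idempotent: diff exits non-zero only when the rendered docker.service.new differs from the installed unit, so the swap, daemon-reload, and docker restart run only on an actual change. The same guard, sketched in Go:

    // Sketch of the idempotent-install guard: only swap the unit in and
    // restart docker when the rendered file actually differs.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func installIfChanged(current, rendered string) error {
    	old, _ := os.ReadFile(current) // a missing file reads as empty
    	neu, err := os.ReadFile(rendered)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(old, neu) {
    		return nil // identical content: skip the restart entirely
    	}
    	if err := os.Rename(rendered, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %s", err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(installIfChanged(
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"))
    }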
	I0818 12:25:10.700520    3721 machine.go:96] duration metric: took 539.166958ms to provisionDockerMachine
	I0818 12:25:10.700526    3721 start.go:293] postStartSetup for "running-upgrade-363000" (driver="qemu2")
	I0818 12:25:10.700532    3721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:25:10.700587    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:25:10.700595    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:25:10.740204    3721 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:25:10.741613    3721 info.go:137] Remote host: Buildroot 2021.02.12
	I0818 12:25:10.741621    3721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/addons for local assets ...
	I0818 12:25:10.741708    3721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/files for local assets ...
	I0818 12:25:10.741832    3721 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem -> 14592.pem in /etc/ssl/certs
	I0818 12:25:10.741962    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:25:10.744934    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:25:10.752097    3721 start.go:296] duration metric: took 51.566625ms for postStartSetup
	I0818 12:25:10.752111    3721 fix.go:56] duration metric: took 604.194375ms for fixHost
	I0818 12:25:10.752150    3721 main.go:141] libmachine: Using SSH client type: native
	I0818 12:25:10.752251    3721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052305a0] 0x105232e00 <nil>  [] 0s} localhost 50226 <nil> <nil>}
	I0818 12:25:10.752256    3721 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:25:10.812902    3721 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724009110.717810471
	
	I0818 12:25:10.812910    3721 fix.go:216] guest clock: 1724009110.717810471
	I0818 12:25:10.812914    3721 fix.go:229] Guest: 2024-08-18 12:25:10.717810471 -0700 PDT Remote: 2024-08-18 12:25:10.752112 -0700 PDT m=+0.707319334 (delta=-34.301529ms)
	I0818 12:25:10.812925    3721 fix.go:200] guest clock delta is within tolerance: -34.301529ms
	I0818 12:25:10.812927    3721 start.go:83] releasing machines lock for "running-upgrade-363000", held for 665.019833ms
	I0818 12:25:10.812987    3721 ssh_runner.go:195] Run: cat /version.json
	I0818 12:25:10.812997    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:25:10.812987    3721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:25:10.813048    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	W0818 12:25:10.813568    3721 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50226: connect: connection refused
	I0818 12:25:10.813593    3721 retry.go:31] will retry after 300.033741ms: dial tcp [::1]:50226: connect: connection refused
	W0818 12:25:11.151850    3721 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0818 12:25:11.151983    3721 ssh_runner.go:195] Run: systemctl --version
	I0818 12:25:11.154206    3721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:25:11.155997    3721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:25:11.156030    3721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0818 12:25:11.159478    3721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0818 12:25:11.163889    3721 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:25:11.163896    3721 start.go:495] detecting cgroup driver to use...
	I0818 12:25:11.163959    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:25:11.169021    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0818 12:25:11.172413    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:25:11.175484    3721 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:25:11.175507    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:25:11.179058    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:25:11.182974    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:25:11.185927    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:25:11.188751    3721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:25:11.192156    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:25:11.195551    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:25:11.198573    3721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
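	This run of sed edits pins containerd to the cgroupfs driver (SystemdCgroup = false) and the runc v2 shim, so the container runtime and the kubelet configuration generated further down agree on a single cgroup driver. The central rewrite, sketched with Go's regexp package:

    // Sketch of the central rewrite in the sed series above: force
    // SystemdCgroup = false in /etc/containerd/config.toml.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Printf("%s", re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false")))
    }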
	I0818 12:25:11.201372    3721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:25:11.204483    3721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:25:11.207188    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:11.295622    3721 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:25:11.306025    3721 start.go:495] detecting cgroup driver to use...
	I0818 12:25:11.306088    3721 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:25:11.312235    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:25:11.317006    3721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:25:11.326556    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:25:11.332235    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:25:11.337293    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:25:11.342861    3721 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:25:11.344185    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:25:11.347189    3721 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0818 12:25:11.351650    3721 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:25:11.444973    3721 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:25:11.538558    3721 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:25:11.538611    3721 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:25:11.544348    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:11.630311    3721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:25:16.238836    3721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.608550667s)
	I0818 12:25:16.238909    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:25:16.243899    3721 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:25:16.251264    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:25:16.255951    3721 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:25:16.327749    3721 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:25:16.410652    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:16.492585    3721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:25:16.498945    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:25:16.503241    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:16.589132    3721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:25:16.633405    3721 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:25:16.633491    3721 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:25:16.635602    3721 start.go:563] Will wait 60s for crictl version
	I0818 12:25:16.635654    3721 ssh_runner.go:195] Run: which crictl
	I0818 12:25:16.637216    3721 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:25:16.648833    3721 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0818 12:25:16.648902    3721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:25:16.661678    3721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:25:16.680981    3721 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0818 12:25:16.681042    3721 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0818 12:25:16.682271    3721 kubeadm.go:883] updating cluster {Name:running-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50258 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0818 12:25:16.682314    3721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:25:16.682349    3721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:25:16.693064    3721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:25:16.693074    3721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
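	The preload check fails on image naming rather than content: this v1.24-era tarball still carries k8s.gcr.io names, while the current binary looks for the registry.k8s.io names the Kubernetes image registry later moved to, so minikube falls back to loading images one by one from the host cache. A sketch of the mismatch:

    // Why the check fails: the tarball's images answer to the old
    // k8s.gcr.io registry, but the binary looks for registry.k8s.io names.
    package main

    import (
    	"fmt"
    	"slices"
    )

    func main() {
    	got := []string{
    		"k8s.gcr.io/kube-apiserver:v1.24.1",
    		"k8s.gcr.io/kube-proxy:v1.24.1",
    	}
    	want := "registry.k8s.io/kube-apiserver:v1.24.1"
    	fmt.Println(slices.Contains(got, want)) // false → per-image fallback
    }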
	I0818 12:25:16.693123    3721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:25:16.696501    3721 ssh_runner.go:195] Run: which lz4
	I0818 12:25:16.697761    3721 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 12:25:16.698945    3721 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 12:25:16.698954    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0818 12:25:17.629078    3721 docker.go:649] duration metric: took 931.354791ms to copy over tarball
	I0818 12:25:17.629137    3721 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 12:25:19.006999    3721 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.377858625s)
	I0818 12:25:19.007012    3721 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 12:25:19.022331    3721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:25:19.025057    3721 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0818 12:25:19.029595    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:19.100881    3721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:25:20.300155    3721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.199269125s)
	I0818 12:25:20.300246    3721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:25:20.318141    3721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:25:20.318150    3721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0818 12:25:20.318154    3721 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 12:25:20.322497    3721 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:25:20.324544    3721 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:25:20.326636    3721 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:25:20.326641    3721 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:25:20.328746    3721 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:25:20.328850    3721 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:25:20.330474    3721 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:25:20.330584    3721 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:25:20.331271    3721 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0818 12:25:20.332653    3721 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:25:20.332991    3721 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:25:20.334490    3721 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:25:20.334513    3721 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0818 12:25:20.334664    3721 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:25:20.335656    3721 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:25:20.336484    3721 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:25:20.723266    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:25:20.735340    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0818 12:25:20.737050    3721 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0818 12:25:20.737074    3721 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:25:20.737103    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:25:20.756157    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:25:20.760867    3721 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0818 12:25:20.760891    3721 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0818 12:25:20.760954    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0818 12:25:20.772514    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:25:20.775781    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0818 12:25:20.778419    3721 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0818 12:25:20.778437    3721 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:25:20.778482    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:25:20.789564    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0818 12:25:20.789686    3721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0818 12:25:20.799763    3721 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0818 12:25:20.799788    3721 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:25:20.799840    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:25:20.803899    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0818 12:25:20.803915    3721 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0818 12:25:20.803933    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0818 12:25:20.805334    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0818 12:25:20.806961    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:25:20.815442    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0818 12:25:20.822196    3721 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0818 12:25:20.822331    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:25:20.825759    3721 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0818 12:25:20.825780    3721 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:25:20.825824    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0818 12:25:20.840447    3721 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0818 12:25:20.840460    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0818 12:25:20.849665    3721 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0818 12:25:20.849696    3721 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:25:20.849984    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:25:20.850106    3721 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0818 12:25:20.850264    3721 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:25:20.850336    3721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:25:20.853555    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0818 12:25:20.853680    3721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:25:20.895061    3721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0818 12:25:20.895094    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0818 12:25:20.895099    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0818 12:25:20.895121    3721 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0818 12:25:20.895134    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0818 12:25:20.895214    3721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:25:20.897003    3721 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0818 12:25:20.897023    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0818 12:25:20.989203    3721 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0818 12:25:20.989317    3721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:25:20.989404    3721 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:25:20.989412    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0818 12:25:21.032575    3721 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0818 12:25:21.032602    3721 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:25:21.032655    3721 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:25:21.130777    3721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0818 12:25:21.160019    3721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 12:25:21.160166    3721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:25:21.176812    3721 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0818 12:25:21.176841    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0818 12:25:21.243080    3721 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:25:21.243095    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0818 12:25:21.708874    3721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 12:25:21.708896    3721 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:25:21.708903    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0818 12:25:22.041879    3721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0818 12:25:22.041915    3721 cache_images.go:92] duration metric: took 1.72376975s to LoadCachedImages
	W0818 12:25:22.041964    3721 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
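	Each fallback load first probes the host-side cache entry with stat before copying it into the VM; pause, coredns, etcd, and storage-provisioner were present and transferred above, but at least the kube-apiserver entry was absent from the host cache, so LoadCachedImages aborts with this warning and the start continues without the control-plane images. The probe, sketched:

    // Sketch of the existence probe behind the "no such file or
    // directory" errors above: stat the host-side cache entry first.
    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	p := "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1"
    	if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("cache miss:", p) // triggers the X warning above
    	}
    }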
	I0818 12:25:22.041972    3721 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0818 12:25:22.042034    3721 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-363000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:25:22.042110    3721 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:25:22.078267    3721 cni.go:84] Creating CNI manager for ""
	I0818 12:25:22.078280    3721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:25:22.078285    3721 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:25:22.078293    3721 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-363000 NodeName:running-upgrade-363000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:25:22.078353    3721 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-363000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
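	The generated config pins cgroupDriver: cgroupfs, matching the containerd and docker setup earlier, and, per its own "# disable disk resource management by default" comment, turns off disk-pressure eviction (imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%). A sketch that reads the driver back out of the KubeletConfiguration document with gopkg.in/yaml.v3:

    // Sketch: extract cgroupDriver from a KubeletConfiguration document.
    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
    	var kc struct {
    		CgroupDriver string `yaml:"cgroupDriver"`
    	}
    	if err := yaml.Unmarshal(doc, &kc); err != nil {
    		panic(err)
    	}
    	fmt.Println(kc.CgroupDriver) // cgroupfs — must match the runtime's driver
    }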
	I0818 12:25:22.078408    3721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0818 12:25:22.082048    3721 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:25:22.082080    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 12:25:22.093885    3721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0818 12:25:22.104492    3721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:25:22.112385    3721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0818 12:25:22.126473    3721 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0818 12:25:22.127750    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:25:22.263279    3721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:25:22.268503    3721 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000 for IP: 10.0.2.15
	I0818 12:25:22.268511    3721 certs.go:194] generating shared ca certs ...
	I0818 12:25:22.268519    3721 certs.go:226] acquiring lock for ca certs: {Name:mk3b1337311c50e97f8d40ca44614fc311e1e2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:25:22.268681    3721 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key
	I0818 12:25:22.268731    3721 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key
	I0818 12:25:22.268740    3721 certs.go:256] generating profile certs ...
	I0818 12:25:22.268796    3721 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.key
	I0818 12:25:22.268814    3721 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key.8473fa40
	I0818 12:25:22.268821    3721 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt.8473fa40 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0818 12:25:22.371649    3721 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt.8473fa40 ...
	I0818 12:25:22.371657    3721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt.8473fa40: {Name:mk59013ac77d1485d1214732f6ce6eb568ece6f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:25:22.371941    3721 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key.8473fa40 ...
	I0818 12:25:22.371946    3721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key.8473fa40: {Name:mka098dd01a8425388bf8c3428994a9ebd21811d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:25:22.372080    3721 certs.go:381] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt.8473fa40 -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt
	I0818 12:25:22.372215    3721 certs.go:385] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key.8473fa40 -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key
	I0818 12:25:22.372365    3721 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/proxy-client.key
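The apiserver profile cert generated above is signed by the shared minikubeCA and carries the service VIP, loopback, and node IPs as SANs. A hedged sketch of issuing such a certificate with crypto/x509; the subject, the fixed serial, and the 26280h lifetime (the CertExpiration value that appears in the StartCluster config below) are assumptions for illustration, not minikube's exact crypto.go logic:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// SignAPIServerCert issues a serving certificate for the apiserver, signed by
// the cluster CA, with the same IP SANs the log records for apiserver.crt.
func SignAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2), // assumption: real code randomizes serials
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log: service VIP, loopback, node IP
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}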
	I0818 12:25:22.372506    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem (1338 bytes)
	W0818 12:25:22.372536    3721 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459_empty.pem, impossibly tiny 0 bytes
	I0818 12:25:22.372542    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:25:22.372563    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem (1078 bytes)
	I0818 12:25:22.372586    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:25:22.372605    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem (1679 bytes)
	I0818 12:25:22.372647    3721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:25:22.372994    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:25:22.380275    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 12:25:22.388950    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:25:22.397588    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 12:25:22.409333    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:25:22.417119    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 12:25:22.430113    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:25:22.442514    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:25:22.452182    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:25:22.461697    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem --> /usr/share/ca-certificates/1459.pem (1338 bytes)
	I0818 12:25:22.468744    3721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1708 bytes)
	I0818 12:25:22.475351    3721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:25:22.480305    3721 ssh_runner.go:195] Run: openssl version
	I0818 12:25:22.481976    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1459.pem && ln -fs /usr/share/ca-certificates/1459.pem /etc/ssl/certs/1459.pem"
	I0818 12:25:22.484970    3721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1459.pem
	I0818 12:25:22.486277    3721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:45 /usr/share/ca-certificates/1459.pem
	I0818 12:25:22.486299    3721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1459.pem
	I0818 12:25:22.488103    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1459.pem /etc/ssl/certs/51391683.0"
	I0818 12:25:22.490796    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I0818 12:25:22.494053    3721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I0818 12:25:22.495389    3721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:45 /usr/share/ca-certificates/14592.pem
	I0818 12:25:22.495411    3721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I0818 12:25:22.497280    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:25:22.499864    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:25:22.502864    3721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:25:22.504331    3721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:25:22.504352    3721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:25:22.506162    3721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
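Each CA bundle above is installed the OpenSSL way: compute the subject hash with openssl x509 -hash and symlink the PEM to /etc/ssl/certs/<hash>.0 (hence b5213941.0 for minikubeCA.pem). A small Go sketch of that hash-then-symlink step, shelling out to the same openssl invocation the log shows; paths are taken from the log and error handling is simplified:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the openssl-hash-then-symlink step from the log:
// OpenSSL's c_rehash convention names each trusted cert <subject-hash>.0.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link first, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}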
	I0818 12:25:22.509465    3721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:25:22.510861    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:25:22.512614    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:25:22.514341    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:25:22.516138    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:25:22.517903    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:25:22.519707    3721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
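The openssl x509 -checkend 86400 runs above ask one question per cert: does it expire within the next 24 hours? The equivalent check in Go's crypto/x509, as a minimal sketch (the example path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}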
	I0818 12:25:22.521555    3721 kubeadm.go:392] StartCluster: {Name:running-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50258 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:25:22.521626    3721 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:25:22.532144    3721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:25:22.535160    3721 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:25:22.535166    3721 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:25:22.535186    3721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:25:22.537972    3721 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:25:22.538218    3721 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-363000" does not appear in /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:25:22.538274    3721 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-984/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-363000" cluster setting kubeconfig missing "running-upgrade-363000" context setting]
	I0818 12:25:22.538405    3721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:25:22.539508    3721 kapi.go:59] client config for running-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1067e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:25:22.539833    3721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:25:22.542562    3721 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-363000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
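Drift detection above leans on diff's exit status: diff -u old new exits 0 when the files match and 1 when they differ, and the non-zero exit is what routes into the reconfigure path. A hedged Go sketch of the same convention (without sudo, and with simplified error handling):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and interprets the exit status the way
// the log above does: 0 = identical, 1 = drift detected, anything else = error.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // files identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure
	}
	return false, "", err // diff itself failed (missing file, etc.)
}

func main() {
	drift, patch, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("kubeadm config drift:\n" + patch)
	}
}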
	I0818 12:25:22.542567    3721 kubeadm.go:1160] stopping kube-system containers ...
	I0818 12:25:22.542605    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:25:22.553976    3721 docker.go:483] Stopping containers: [0219e3a900cb ce6f54feb45f 3650eda70963 763b3bb083cb f70d1c9231bb 514e7d4af075 ea3af85dc5a1 0254183612c2 f7e9dad21f3c d3f7643be217 a2ba10c4562d 6c149833de79 b6665dee0520 c66efbc03ea1 cd0a91fdf9fa bed6a41980eb 80bc6301356c 27fda46fd007]
	I0818 12:25:22.554048    3721 ssh_runner.go:195] Run: docker stop 0219e3a900cb ce6f54feb45f 3650eda70963 763b3bb083cb f70d1c9231bb 514e7d4af075 ea3af85dc5a1 0254183612c2 f7e9dad21f3c d3f7643be217 a2ba10c4562d 6c149833de79 b6665dee0520 c66efbc03ea1 cd0a91fdf9fa bed6a41980eb 80bc6301356c 27fda46fd007
	I0818 12:25:22.804024    3721 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 12:25:22.890994    3721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:25:22.894674    3721 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 18 19:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 18 19:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 18 19:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 18 19:24 /etc/kubernetes/scheduler.conf
	
	I0818 12:25:22.894709    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf
	I0818 12:25:22.897389    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:25:22.897425    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:25:22.900866    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf
	I0818 12:25:22.904049    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:25:22.904077    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:25:22.907302    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf
	I0818 12:25:22.910322    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:25:22.910347    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:25:22.914103    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf
	I0818 12:25:22.918038    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:25:22.918071    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
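The four grep-then-rm sequences above implement one rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:50258 is stale and is removed before kubeadm regenerates it. A minimal in-process Go sketch of that rule, using a substring check instead of shelling out to grep:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes conf when it does not mention the expected control
// plane endpoint, mirroring the grep-then-rm sequence in the log above.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	fmt.Printf("%q not in %s - removing\n", endpoint, conf)
	return os.Remove(conf)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50258"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}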
	I0818 12:25:22.921392    3721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:25:22.925303    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:25:22.953809    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:25:23.410290    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:25:23.615053    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:25:23.642187    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:25:23.664258    3721 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:25:23.664339    3721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:25:24.166476    3721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:25:24.666381    3721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:25:24.671219    3721 api_server.go:72] duration metric: took 1.006972292s to wait for apiserver process to appear ...
	I0818 12:25:24.671230    3721 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:25:24.671245    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:29.673346    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:29.673401    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:34.673868    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:34.673958    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:39.674838    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:39.674859    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:44.675546    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:44.675615    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:49.676830    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:49.676882    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:54.678262    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:54.678392    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:25:59.680488    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:25:59.680576    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:04.683243    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:04.683331    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:09.685988    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:09.686106    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:14.688618    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:14.688702    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:19.691423    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:19.691509    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:24.694056    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
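Each "Checking apiserver healthz" line above is a GET against https://10.0.2.15:8443/healthz with a 5-second client timeout, which is why the "Client.Timeout exceeded while awaiting headers" failures land exactly 5 seconds apart. A minimal Go sketch of that polling loop; the overall deadline and the InsecureSkipVerify shortcut are assumptions (minikube verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver the way the loop above does: each probe
// is a fresh GET with a 5s client timeout, retried until 200 OK or deadline.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s spacing of the log lines
		Transport: &http.Transport{
			// Assumption: the sketch skips verification; minikube pins the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry cadence seen earlier in the log
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}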
	I0818 12:26:24.694470    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:26:24.736007    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:26:24.736139    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:26:24.755918    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:26:24.756012    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:26:24.770800    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:26:24.770875    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:26:24.782917    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:26:24.782989    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:26:24.793298    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:26:24.793371    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:26:24.805914    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:26:24.805981    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:26:24.815788    3721 logs.go:276] 0 containers: []
	W0818 12:26:24.815802    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:26:24.815858    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:26:24.826726    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:26:24.826741    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:26:24.826746    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:26:24.831165    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:26:24.831175    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:26:24.901698    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:26:24.901711    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:26:24.913804    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:26:24.913816    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:26:24.925317    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:26:24.925328    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:26:24.936512    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:26:24.936525    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:26:24.974905    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:26:24.974913    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:26:24.987000    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:26:24.987010    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:26:25.005569    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:26:25.005581    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:26:25.022873    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:26:25.022882    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:26:25.049295    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:26:25.049302    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:26:25.062966    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:26:25.062975    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:26:25.074381    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:26:25.074392    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:26:25.088584    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:26:25.088594    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:26:25.099966    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:26:25.099980    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:26:25.111766    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:26:25.111775    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:26:25.122865    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:26:25.122876    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
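The diagnostics pass above follows one pattern per component: list matching container IDs with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each with docker logs --tail 400. A compact Go sketch of that pattern, shelling out to the same docker commands the log shows; the component list here is abbreviated:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailContainer mirrors one "Gathering logs for ..." step above: resolve the
// container IDs for a component, then tail the last 400 lines of each.
func tailContainer(component string) error {
	filter := fmt.Sprintf("--filter=name=k8s_%s", component)
	out, err := exec.Command("docker", "ps", "-a", filter, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if err := tailContainer(c); err != nil {
			fmt.Println(err)
		}
	}
}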
	I0818 12:26:27.637140    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:32.639524    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:32.639820    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:26:32.665007    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:26:32.665136    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:26:32.682389    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:26:32.682471    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:26:32.695835    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:26:32.695898    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:26:32.706985    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:26:32.707069    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:26:32.716889    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:26:32.716959    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:26:32.726961    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:26:32.727021    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:26:32.737194    3721 logs.go:276] 0 containers: []
	W0818 12:26:32.737204    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:26:32.737261    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:26:32.747714    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:26:32.747737    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:26:32.747744    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:26:32.759396    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:26:32.759424    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:26:32.771129    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:26:32.771140    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:26:32.783702    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:26:32.783715    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:26:32.788206    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:26:32.788212    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:26:32.806680    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:26:32.806693    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:26:32.817581    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:26:32.817593    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:26:32.829032    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:26:32.829044    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:26:32.865150    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:26:32.865160    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:26:32.889997    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:26:32.890004    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:26:32.925606    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:26:32.925617    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:26:32.937130    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:26:32.937142    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:26:32.947973    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:26:32.947984    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:26:32.973539    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:26:32.973550    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:26:32.987706    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:26:32.987736    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:26:33.001124    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:26:33.001137    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:26:33.016177    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:26:33.016189    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:26:35.530759    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:40.533353    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:40.533617    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:26:40.565098    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:26:40.565234    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:26:40.581481    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:26:40.581564    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:26:40.596448    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:26:40.596510    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:26:40.611247    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:26:40.611318    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:26:40.621522    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:26:40.621589    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:26:40.631992    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:26:40.632055    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:26:40.645519    3721 logs.go:276] 0 containers: []
	W0818 12:26:40.645532    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:26:40.645611    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:26:40.656219    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:26:40.656235    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:26:40.656240    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:26:40.669786    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:26:40.669797    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:26:40.684296    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:26:40.684308    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:26:40.721949    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:26:40.721957    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:26:40.726502    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:26:40.726510    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:26:40.762453    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:26:40.762466    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:26:40.774088    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:26:40.774100    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:26:40.786026    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:26:40.786040    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:26:40.800268    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:26:40.800281    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:26:40.819888    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:26:40.819899    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:26:40.830890    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:26:40.830904    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:26:40.856464    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:26:40.856475    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:26:40.876988    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:26:40.877000    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:26:40.893746    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:26:40.893756    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:26:40.905372    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:26:40.905384    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:26:40.919787    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:26:40.919799    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:26:40.931153    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:26:40.931163    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:26:43.445157    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:48.446722    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:48.447118    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:26:48.482802    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:26:48.482942    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:26:48.511544    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:26:48.511627    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:26:48.524906    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:26:48.524977    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:26:48.536608    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:26:48.536681    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:26:48.551857    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:26:48.551926    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:26:48.562321    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:26:48.562391    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:26:48.572348    3721 logs.go:276] 0 containers: []
	W0818 12:26:48.572357    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:26:48.572409    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:26:48.593670    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:26:48.593689    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:26:48.593695    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:26:48.630636    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:26:48.630643    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:26:48.642127    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:26:48.642136    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:26:48.660158    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:26:48.660166    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:26:48.664412    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:26:48.664420    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:26:48.680398    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:26:48.680411    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:26:48.692199    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:26:48.692212    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:26:48.706931    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:26:48.706941    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:26:48.724287    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:26:48.724298    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:26:48.736393    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:26:48.736405    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:26:48.747941    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:26:48.747951    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:26:48.759221    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:26:48.759231    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:26:48.794157    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:26:48.794167    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:26:48.805367    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:26:48.805378    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:26:48.816909    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:26:48.816920    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:26:48.830041    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:26:48.830052    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:26:48.841980    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:26:48.841992    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:26:51.369358    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:26:56.371949    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:26:56.372385    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:26:56.412207    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:26:56.412344    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:26:56.434498    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:26:56.434605    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:26:56.449334    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:26:56.449411    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:26:56.461904    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:26:56.461967    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:26:56.473425    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:26:56.473501    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:26:56.484392    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:26:56.484458    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:26:56.494414    3721 logs.go:276] 0 containers: []
	W0818 12:26:56.494424    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:26:56.494484    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:26:56.505138    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:26:56.505155    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:26:56.505161    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:26:56.519901    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:26:56.519913    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:26:56.537706    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:26:56.537719    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:26:56.551519    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:26:56.551532    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:26:56.563940    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:26:56.563952    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:26:56.575680    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:26:56.575693    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:26:56.586710    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:26:56.586719    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:26:56.621370    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:26:56.621382    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:26:56.635526    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:26:56.635538    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:26:56.654466    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:26:56.654477    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:26:56.665665    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:26:56.665674    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:26:56.677788    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:26:56.677799    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:26:56.689308    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:26:56.689318    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:26:56.694206    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:26:56.694214    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:26:56.720331    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:26:56.720338    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:26:56.731916    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:26:56.731926    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:26:56.768954    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:26:56.768966    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:26:59.282728    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:04.285622    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:04.286039    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:04.325436    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:04.325561    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:04.347019    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:04.347144    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:04.363746    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:04.363830    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:04.376546    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:04.376620    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:04.387360    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:04.387426    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:04.407355    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:04.407425    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:04.418319    3721 logs.go:276] 0 containers: []
	W0818 12:27:04.418331    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:04.418390    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:04.428875    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:04.428898    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:04.428904    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:04.440940    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:04.440954    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:04.455100    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:04.455109    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:04.467038    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:04.467048    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:04.478123    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:04.478134    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:04.492893    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:04.492905    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:04.503910    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:04.503921    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:04.521312    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:04.521324    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:04.547354    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:04.547365    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:04.551524    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:04.551531    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:04.563385    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:04.563395    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:04.575320    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:04.575332    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:04.587092    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:04.587103    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:04.598547    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:04.598556    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:04.634474    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:04.634482    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:04.670614    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:04.670625    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:04.684751    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:04.684762    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:07.204225    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:12.206454    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:12.206845    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:12.260032    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:12.260162    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:12.278314    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:12.278394    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:12.292033    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:12.292104    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:12.304908    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:12.305010    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:12.315387    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:12.315458    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:12.326464    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:12.326532    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:12.338080    3721 logs.go:276] 0 containers: []
	W0818 12:27:12.338095    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:12.338148    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:12.348688    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:12.348708    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:12.348713    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:12.360401    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:12.360412    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:12.398372    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:12.398381    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:12.402676    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:12.402686    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:12.437068    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:12.437082    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:12.448569    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:12.448582    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:12.462027    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:12.462038    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:12.479709    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:12.479720    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:12.491059    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:12.491072    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:12.502809    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:12.502824    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:12.528929    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:12.528939    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:12.540817    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:12.540830    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:12.554842    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:12.554856    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:12.568705    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:12.568717    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:12.580065    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:12.580086    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:12.591601    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:12.591611    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:12.607997    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:12.608008    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:15.121169    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:20.123896    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:20.124341    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:20.173157    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:20.173301    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:20.192801    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:20.192903    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:20.206686    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:20.206752    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:20.221365    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:20.221435    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:20.232186    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:20.232245    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:20.242644    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:20.242711    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:20.253199    3721 logs.go:276] 0 containers: []
	W0818 12:27:20.253210    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:20.253267    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:20.263910    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:20.263928    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:20.263934    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:20.287744    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:20.287751    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:20.299554    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:20.299564    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:20.314393    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:20.314405    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:20.328378    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:20.328389    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:20.345823    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:20.345832    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:20.361099    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:20.361109    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:20.372085    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:20.372097    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:20.406797    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:20.406812    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:20.419103    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:20.419115    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:20.431153    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:20.431164    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:20.448102    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:20.448113    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:20.459617    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:20.459630    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:20.498236    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:20.498243    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:20.502972    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:20.502978    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:20.514172    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:20.514183    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:20.525382    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:20.525392    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:23.038267    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:28.040620    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:28.041006    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:28.087638    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:28.087756    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:28.106990    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:28.107095    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:28.121501    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:28.121565    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:28.137032    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:28.137114    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:28.147276    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:28.147334    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:28.158095    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:28.158165    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:28.183740    3721 logs.go:276] 0 containers: []
	W0818 12:27:28.183753    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:28.183811    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:28.208447    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:28.208464    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:28.208471    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:28.220493    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:28.220504    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:28.232022    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:28.232033    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:28.242856    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:28.242867    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:28.268718    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:28.268728    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:28.286509    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:28.286518    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:28.296566    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:28.296573    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:28.316113    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:28.316124    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:28.328759    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:28.328770    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:28.345898    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:28.345908    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:28.357606    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:28.357615    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:28.396074    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:28.396084    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:28.413622    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:28.413633    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:28.453567    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:28.453578    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:28.474638    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:28.474648    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:28.485911    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:28.485921    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:28.497362    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:28.497372    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:31.010458    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:36.012862    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:36.013032    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:36.028281    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:36.028356    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:36.041150    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:36.041225    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:36.052184    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:36.052246    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:36.066598    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:36.066668    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:36.076864    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:36.076932    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:36.091809    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:36.091874    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:36.101718    3721 logs.go:276] 0 containers: []
	W0818 12:27:36.101730    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:36.101784    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:36.112383    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:36.112399    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:36.112405    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:36.116992    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:36.116999    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:36.130496    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:36.130505    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:36.142250    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:36.142262    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:36.154231    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:36.154243    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:36.165183    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:36.165194    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:36.176759    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:36.176770    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:36.212908    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:36.212917    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:36.227799    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:36.227811    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:36.246534    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:36.246545    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:36.261826    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:36.261836    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:36.286963    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:36.286983    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:36.298389    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:36.298400    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:36.337612    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:36.337623    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:36.355569    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:36.355582    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:36.371142    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:36.371156    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:36.389257    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:36.389267    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:38.903527    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:43.906140    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:43.906309    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:43.919001    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:43.919072    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:43.930247    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:43.930313    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:43.941465    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:43.941532    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:43.953452    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:43.953520    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:43.969065    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:43.969128    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:43.980323    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:43.980390    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:43.990774    3721 logs.go:276] 0 containers: []
	W0818 12:27:43.990786    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:43.990837    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:44.002329    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:44.002346    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:44.002353    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:44.007360    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:44.007367    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:44.018637    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:44.018649    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:44.030405    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:44.030416    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:44.042445    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:44.042462    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:44.056623    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:44.056634    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:44.070810    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:44.070820    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:44.088625    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:44.088636    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:44.099809    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:44.099820    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:44.121947    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:44.121958    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:44.134843    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:44.134854    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:44.158990    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:44.158998    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:44.196575    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:44.196582    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:44.231762    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:44.231774    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:44.244863    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:44.244873    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:44.256486    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:44.256497    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:44.268158    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:44.268168    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
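
Every gather pass in this log runs the same two-step sequence per component: enumerate matching container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each container's last 400 log lines. A self-contained sketch of that sequence follows, under the assumptions that docker is on PATH and is invoked directly rather than over SSH inside the guest as ssh_runner.go does; containerIDs is a hypothetical helper, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the "docker ps -a --filter=name=k8s_<name>" calls in
// the log: it returns every container ID (running or exited) whose name
// matches the given component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component set the log iterates over; kindnet yields zero
	// containers here, matching the 'No container was found matching
	// "kindnet"' warnings above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in "docker logs --tail 400 <id>".
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
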
	I0818 12:27:46.782456    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:51.784731    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:51.784917    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:51.800030    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:51.800137    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:51.812060    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:51.812138    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:51.823167    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:51.823239    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:51.836315    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:51.836379    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:51.847504    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:51.847573    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:51.857960    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:51.858024    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:51.868406    3721 logs.go:276] 0 containers: []
	W0818 12:27:51.868416    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:51.868471    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:51.878816    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:51.878835    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:51.878841    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:51.883570    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:27:51.883578    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:27:51.898177    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:51.898188    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:51.922789    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:51.922799    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:51.934830    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:51.934842    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:51.959863    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:51.959871    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:51.972338    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:51.972348    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:27:52.010847    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:27:52.010858    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:27:52.029280    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:52.029291    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:52.040913    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:27:52.040927    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:27:52.058504    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:52.058515    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:52.070784    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:52.070797    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:52.110100    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:27:52.110108    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:27:52.121789    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:52.121800    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:52.134062    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:52.134074    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:52.145824    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:52.145836    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:52.157917    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:27:52.157927    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:27:54.671552    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:27:59.672300    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:27:59.672413    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:27:59.684601    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:27:59.684678    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:27:59.696857    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:27:59.696936    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:27:59.709141    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:27:59.709213    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:27:59.732209    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:27:59.732284    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:27:59.744079    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:27:59.744149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:27:59.761089    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:27:59.761160    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:27:59.772407    3721 logs.go:276] 0 containers: []
	W0818 12:27:59.772418    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:27:59.772489    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:27:59.788546    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:27:59.788565    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:27:59.788571    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:27:59.805798    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:27:59.805810    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:27:59.824344    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:27:59.824357    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:27:59.837692    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:27:59.837703    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:27:59.849902    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:27:59.849915    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:27:59.863493    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:27:59.863505    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:27:59.889391    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:27:59.889408    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:27:59.903204    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:27:59.903215    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:27:59.915948    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:27:59.915960    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:27:59.928962    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:27:59.928974    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:27:59.968285    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:27:59.968303    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:27:59.973140    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:27:59.973149    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:00.010427    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:00.010440    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:00.026562    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:00.026577    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:00.046744    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:00.046758    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:00.062679    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:00.062691    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:00.075592    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:00.075604    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:02.590601    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:07.593040    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:07.593192    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:07.610639    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:07.610708    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:07.621012    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:07.621084    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:07.631093    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:07.631157    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:07.645831    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:07.645900    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:07.658484    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:07.658556    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:07.676509    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:07.676579    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:07.686909    3721 logs.go:276] 0 containers: []
	W0818 12:28:07.686921    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:07.686993    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:07.697398    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:07.697420    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:07.697425    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:07.709316    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:07.709327    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:07.722987    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:07.723000    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:07.740145    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:07.740158    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:07.752539    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:07.752549    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:07.790939    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:07.790950    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:07.802503    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:07.802517    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:07.814524    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:07.814535    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:07.819059    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:07.819068    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:07.853148    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:07.853163    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:07.871771    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:07.871783    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:07.883923    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:07.883935    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:07.898569    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:07.898583    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:07.910481    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:07.910497    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:07.922534    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:07.922546    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:07.934273    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:07.934284    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:07.951426    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:07.951437    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:10.477007    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:15.478971    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:15.479137    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:15.491877    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:15.491951    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:15.503021    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:15.503095    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:15.514875    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:15.514948    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:15.526751    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:15.526830    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:15.538214    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:15.538284    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:15.551649    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:15.551721    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:15.562150    3721 logs.go:276] 0 containers: []
	W0818 12:28:15.562162    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:15.562221    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:15.576961    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:15.576977    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:15.576982    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:15.616818    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:15.616838    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:15.631941    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:15.631950    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:15.646736    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:15.646746    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:15.658412    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:15.658423    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:15.670724    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:15.670736    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:15.683291    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:15.683302    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:15.687813    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:15.687823    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:15.701104    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:15.701117    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:15.720455    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:15.720467    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:15.758204    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:15.758222    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:15.776327    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:15.776340    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:15.788200    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:15.788213    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:15.802569    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:15.802582    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:15.815583    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:15.815596    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:15.834027    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:15.834038    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:15.860396    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:15.860416    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:18.380966    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:23.383737    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:23.383895    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:23.397244    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:23.397319    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:23.409156    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:23.409228    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:23.419649    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:23.419717    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:23.430434    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:23.430506    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:23.441334    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:23.441400    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:23.451995    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:23.452064    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:23.462087    3721 logs.go:276] 0 containers: []
	W0818 12:28:23.462098    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:23.462149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:23.472314    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:23.472332    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:23.472338    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:23.476629    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:23.476634    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:23.493287    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:23.493300    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:23.505149    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:23.505159    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:23.524220    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:23.524230    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:23.541293    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:23.541304    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:23.566100    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:23.566109    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:23.592019    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:23.592034    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:23.606807    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:23.606819    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:23.618421    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:23.618432    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:23.630642    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:23.630652    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:23.668462    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:23.668472    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:23.686678    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:23.686688    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:23.700352    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:23.700363    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:23.712109    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:23.712121    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:23.754244    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:23.754255    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:23.771753    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:23.771765    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:26.285486    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:31.287937    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:31.288388    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:31.332372    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:31.332499    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:31.356511    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:31.356594    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:31.371517    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:31.371588    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:31.383282    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:31.383355    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:31.393780    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:31.393849    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:31.404707    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:31.404777    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:31.415144    3721 logs.go:276] 0 containers: []
	W0818 12:28:31.415155    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:31.415210    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:31.425753    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:31.425770    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:31.425775    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:31.430532    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:31.430539    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:31.454685    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:31.454695    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:31.466373    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:31.466389    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:31.483716    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:31.483726    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:31.496175    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:31.496186    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:31.507814    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:31.507828    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:31.544408    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:31.544419    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:31.557256    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:31.557268    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:31.572310    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:31.572322    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:31.615003    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:31.615015    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:31.629620    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:31.629632    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:31.641841    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:31.641851    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:31.656022    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:31.656032    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:31.680247    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:31.680255    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:31.691660    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:31.691671    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:31.703998    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:31.704011    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
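The log-gathering pass itself is mechanical: for each control-plane component, list containers whose name matches the k8s_<component> prefix, then tail the last 400 lines of each, alongside journalctl for the kubelet and Docker units and a kubectl describe nodes. A sketch of that loop, reusing the exact docker commands from the run lines above (error handling simplified; not the real logs.go code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers runs the same discovery command as the run lines above:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, _ := listK8sContainers(c)
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		for _, id := range ids {
			// Tail each container's log, as in: docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
		}
	}
}
```

This also explains the repeated "0 containers" warning for kindnet above: the filter simply matches nothing on this cluster, which uses the bridge CNI rather than kindnet.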
	I0818 12:28:34.218222    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:39.220343    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:39.220465    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:39.235296    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:39.235370    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:39.247724    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:39.247802    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:39.260290    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:39.260370    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:39.273182    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:39.273257    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:39.286102    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:39.286171    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:39.298028    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:39.298090    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:39.311531    3721 logs.go:276] 0 containers: []
	W0818 12:28:39.311543    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:39.311598    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:39.322860    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:39.322880    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:39.322886    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:39.337240    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:39.337250    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:39.349552    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:39.349564    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:39.388993    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:39.389005    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:39.393852    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:39.393859    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:39.405587    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:39.405601    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:39.423918    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:39.423929    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:39.439615    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:39.439627    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:39.451895    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:39.451905    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:39.463574    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:39.463585    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:39.474655    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:39.474667    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:39.487248    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:39.487261    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:39.498906    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:39.498916    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:39.538498    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:39.538511    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:39.560067    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:39.560083    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:39.574380    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:39.574393    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:39.595317    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:39.595339    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:42.124463    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:47.126779    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:47.127227    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:47.167618    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:47.167751    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:47.190648    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:47.190749    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:47.205997    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:47.206084    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:47.219224    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:47.219293    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:47.234723    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:47.234779    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:47.246970    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:47.247044    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:47.257274    3721 logs.go:276] 0 containers: []
	W0818 12:28:47.257287    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:47.257344    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:47.267826    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:47.267844    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:47.267850    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:47.301798    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:47.301808    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:47.314307    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:47.314322    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:47.326830    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:47.326842    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:47.338714    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:47.338727    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:47.351267    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:47.351280    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:47.364086    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:47.364099    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:47.401621    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:47.401629    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:47.415493    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:47.415504    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:47.427087    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:47.427099    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:47.449551    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:47.449560    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:47.454022    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:47.454031    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:47.468470    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:47.468480    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:47.485706    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:47.485716    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:47.508292    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:47.508303    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:47.525070    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:47.525085    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:47.554359    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:47.554371    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:50.066828    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:55.067694    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:55.067924    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:55.096786    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:55.096910    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:55.114632    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:55.114747    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:55.128478    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:55.128557    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:55.140090    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:55.140163    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:55.150736    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:55.150801    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:55.160884    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:55.160954    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:55.171063    3721 logs.go:276] 0 containers: []
	W0818 12:28:55.171076    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:55.171132    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:55.181760    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:55.181780    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:55.181789    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:55.196128    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:55.196138    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:55.222178    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:55.222188    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:55.233617    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:55.233633    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:55.270754    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:55.270763    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:55.305540    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:55.305552    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:55.318815    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:55.318828    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:55.336435    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:55.336448    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:55.348518    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:55.348530    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:55.360066    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:55.360075    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:55.371623    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:55.371636    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:55.383162    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:55.383171    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:55.396037    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:55.396049    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:55.411021    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:55.411034    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:55.418303    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:55.418318    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:55.434234    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:55.434250    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:55.448207    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:55.448221    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:57.973963    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:02.976131    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:02.976236    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:02.986779    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:02.986863    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:02.998038    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:02.998133    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:03.009380    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:03.009458    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:03.020354    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:03.020437    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:03.030393    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:03.030455    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:03.045092    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:03.045166    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:03.055666    3721 logs.go:276] 0 containers: []
	W0818 12:29:03.055678    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:03.055744    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:03.066988    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:03.067010    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:03.067016    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:03.103889    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:03.103900    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:03.116040    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:03.116052    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:03.127554    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:03.127566    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:03.138698    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:03.138710    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:03.173128    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:03.173139    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:03.187549    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:03.187560    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:03.204777    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:03.204787    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:03.227131    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:03.227141    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:03.231246    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:03.231252    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:03.245003    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:03.245012    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:03.256273    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:03.256283    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:03.268097    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:03.268108    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:03.281479    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:03.281490    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:03.292554    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:03.292566    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:03.304384    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:03.304396    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:03.321512    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:03.321522    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:05.835103    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:10.835907    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:10.836015    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:10.849399    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:10.849473    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:10.860639    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:10.860706    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:10.871376    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:10.871444    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:10.882775    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:10.882849    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:10.893470    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:10.893529    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:10.905577    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:10.905651    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:10.916002    3721 logs.go:276] 0 containers: []
	W0818 12:29:10.916016    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:10.916072    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:10.926324    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:10.926345    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:10.926352    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:10.965782    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:10.965797    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:10.980743    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:10.980760    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:10.992656    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:10.992667    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:11.004021    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:11.004033    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:11.015166    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:11.015201    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:11.026361    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:11.026376    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:11.044279    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:11.044289    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:11.059699    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:11.059712    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:11.072110    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:11.072124    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:11.087077    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:11.087091    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:11.098417    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:11.098431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:11.134646    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:11.134656    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:11.146035    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:11.146046    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:11.164019    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:11.164030    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:11.175595    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:11.175608    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:11.198083    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:11.198091    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:13.704557    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:18.706771    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:18.706936    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:18.719899    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:18.719977    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:18.730674    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:18.730741    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:18.742690    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:18.742759    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:18.755528    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:18.755603    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:18.765675    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:18.765745    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:18.776619    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:18.776685    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:18.786933    3721 logs.go:276] 0 containers: []
	W0818 12:29:18.786945    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:18.787005    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:18.800662    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:18.800682    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:18.800687    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:18.836599    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:18.836610    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:18.848385    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:18.848396    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:18.865366    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:18.865375    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:18.877698    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:18.877710    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:18.891803    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:18.891815    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:18.904328    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:18.904337    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:18.930097    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:18.930108    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:18.953372    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:18.953381    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:18.966408    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:18.966418    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:19.003484    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:19.003492    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:19.022248    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:19.022261    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:19.033896    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:19.033908    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:19.045377    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:19.045391    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:19.050342    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:19.050351    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:19.062317    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:19.062327    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:19.073634    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:19.073647    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:21.591019    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:26.593437    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:26.593535    3721 kubeadm.go:597] duration metric: took 4m4.060508375s to restartPrimaryControlPlane
	W0818 12:29:26.593583    3721 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 12:29:26.593609    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0818 12:29:27.553550    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:29:27.558712    3721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:29:27.561503    3721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:29:27.564282    3721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:29:27.564288    3721 kubeadm.go:157] found existing configuration files:
	
	I0818 12:29:27.564310    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf
	I0818 12:29:27.566735    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:29:27.566753    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:29:27.569412    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf
	I0818 12:29:27.572374    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:29:27.572394    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:29:27.575106    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf
	I0818 12:29:27.577671    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:29:27.577690    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:29:27.580992    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf
	I0818 12:29:27.583589    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:29:27.583609    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
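The cleanup pass above (kubeadm.go:163) reduces to: for each kubeconfig under /etc/kubernetes, check whether it references the expected control-plane endpoint and delete it if not, so the subsequent kubeadm init regenerates all four files. A sketch of that loop, with the endpoint and paths taken verbatim from the log (illustrative, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Endpoint and file paths are copied verbatim from the log lines above.
const endpoint = "https://control-plane.minikube.internal:50258"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing (status 2 in the log); either way the file is removed so
		// the following `kubeadm init` regenerates it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```

Here the grep fails with status 2 on every file because `kubeadm reset` already deleted them, so the rm calls are no-ops and init proceeds from a clean slate.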
	I0818 12:29:27.586002    3721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 12:29:27.603899    3721 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0818 12:29:27.603936    3721 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 12:29:27.650137    3721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 12:29:27.650190    3721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 12:29:27.650237    3721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 12:29:27.700100    3721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 12:29:27.704069    3721 out.go:235]   - Generating certificates and keys ...
	I0818 12:29:27.704099    3721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 12:29:27.704131    3721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 12:29:27.704171    3721 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 12:29:27.704200    3721 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 12:29:27.704234    3721 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 12:29:27.704260    3721 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 12:29:27.704322    3721 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 12:29:27.704349    3721 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 12:29:27.704393    3721 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 12:29:27.704430    3721 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 12:29:27.704455    3721 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 12:29:27.704489    3721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 12:29:27.843754    3721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 12:29:28.109279    3721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 12:29:28.168519    3721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 12:29:28.233520    3721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 12:29:28.262042    3721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 12:29:28.262469    3721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 12:29:28.262497    3721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 12:29:28.351817    3721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 12:29:28.355078    3721 out.go:235]   - Booting up control plane ...
	I0818 12:29:28.355161    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 12:29:28.355204    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 12:29:28.355238    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 12:29:28.355276    3721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 12:29:28.355593    3721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 12:29:32.858400    3721 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502554 seconds
	I0818 12:29:32.858528    3721 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 12:29:32.862094    3721 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 12:29:33.372428    3721 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 12:29:33.372553    3721 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-363000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 12:29:33.876485    3721 kubeadm.go:310] [bootstrap-token] Using token: cfss4f.fgmjhgud2ap50126
	I0818 12:29:33.882551    3721 out.go:235]   - Configuring RBAC rules ...
	I0818 12:29:33.882615    3721 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 12:29:33.882665    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 12:29:33.887205    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 12:29:33.888062    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 12:29:33.889414    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 12:29:33.890195    3721 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 12:29:33.893354    3721 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 12:29:34.056219    3721 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 12:29:34.280262    3721 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 12:29:34.280774    3721 kubeadm.go:310] 
	I0818 12:29:34.280813    3721 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 12:29:34.280816    3721 kubeadm.go:310] 
	I0818 12:29:34.280853    3721 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 12:29:34.280857    3721 kubeadm.go:310] 
	I0818 12:29:34.280868    3721 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 12:29:34.280905    3721 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 12:29:34.280933    3721 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 12:29:34.280938    3721 kubeadm.go:310] 
	I0818 12:29:34.280968    3721 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 12:29:34.280973    3721 kubeadm.go:310] 
	I0818 12:29:34.280996    3721 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 12:29:34.280999    3721 kubeadm.go:310] 
	I0818 12:29:34.281026    3721 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 12:29:34.281065    3721 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 12:29:34.281101    3721 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 12:29:34.281105    3721 kubeadm.go:310] 
	I0818 12:29:34.281141    3721 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 12:29:34.281181    3721 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 12:29:34.281184    3721 kubeadm.go:310] 
	I0818 12:29:34.281222    3721 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cfss4f.fgmjhgud2ap50126 \
	I0818 12:29:34.281284    3721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 \
	I0818 12:29:34.281296    3721 kubeadm.go:310] 	--control-plane 
	I0818 12:29:34.281301    3721 kubeadm.go:310] 
	I0818 12:29:34.281345    3721 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 12:29:34.281348    3721 kubeadm.go:310] 
	I0818 12:29:34.281395    3721 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cfss4f.fgmjhgud2ap50126 \
	I0818 12:29:34.281452    3721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 
	I0818 12:29:34.281503    3721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 12:29:34.281561    3721 cni.go:84] Creating CNI manager for ""
	I0818 12:29:34.281571    3721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:29:34.286028    3721 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 12:29:34.290029    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 12:29:34.293050    3721 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
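The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a hypothetical bridge conflist of the kind this step installs might look like the sketch below; the actual file's fields and values may differ.

```go
package main

import "os"

// Hypothetical bridge conflist; the real 496-byte payload is not shown in
// the log, and its exact contents may differ from this sketch.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Equivalent of the scp step: place the config where the kubelet's CNI
	// discovery (/etc/cni/net.d) will pick it up. Requires root.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```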
	I0818 12:29:34.297847    3721 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 12:29:34.297890    3721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 12:29:34.297908    3721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-363000 minikube.k8s.io/updated_at=2024_08_18T12_29_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=running-upgrade-363000 minikube.k8s.io/primary=true
	I0818 12:29:34.342442    3721 ops.go:34] apiserver oom_adj: -16
	I0818 12:29:34.342454    3721 kubeadm.go:1113] duration metric: took 44.600583ms to wait for elevateKubeSystemPrivileges
	I0818 12:29:34.342463    3721 kubeadm.go:394] duration metric: took 4m11.823123083s to StartCluster
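The oom_adj check a few lines up runs `cat /proc/$(pgrep kube-apiserver)/oom_adj` and records -16, meaning the apiserver process is strongly protected from the kernel's OOM killer. An equivalent read in Go (illustrative only):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver PID, as in: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("pgrep:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	score, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// -16 tells the kernel's OOM killer to strongly prefer other victims.
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(score)))
}
```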
	I0818 12:29:34.342473    3721 settings.go:142] acquiring lock: {Name:mk5a561ec5cb84c336df08f67624cd54d50bdb17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:29:34.342563    3721 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:29:34.342946    3721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:29:34.343155    3721 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:29:34.343160    3721 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:29:34.343190    3721 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-363000"
	I0818 12:29:34.343202    3721 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-363000"
	W0818 12:29:34.343205    3721 addons.go:243] addon storage-provisioner should already be in state true
	I0818 12:29:34.343220    3721 host.go:66] Checking if "running-upgrade-363000" exists ...
	I0818 12:29:34.343222    3721 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-363000"
	I0818 12:29:34.343245    3721 config.go:182] Loaded profile config "running-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:29:34.343265    3721 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-363000"
	I0818 12:29:34.344185    3721 kapi.go:59] client config for running-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1067e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:29:34.344302    3721 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-363000"
	W0818 12:29:34.344306    3721 addons.go:243] addon default-storageclass should already be in state true
	I0818 12:29:34.344316    3721 host.go:66] Checking if "running-upgrade-363000" exists ...
	I0818 12:29:34.347053    3721 out.go:177] * Verifying Kubernetes components...
	I0818 12:29:34.347339    3721 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 12:29:34.350122    3721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 12:29:34.350128    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:29:34.353720    3721 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:29:34.357956    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:29:34.360965    3721 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:29:34.360970    3721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 12:29:34.360975    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:29:34.450824    3721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:29:34.455940    3721 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:29:34.455985    3721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:29:34.459904    3721 api_server.go:72] duration metric: took 116.7395ms to wait for apiserver process to appear ...
	I0818 12:29:34.459912    3721 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:29:34.459918    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:34.494946    3721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 12:29:34.544522    3721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
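
The two kubectl apply lines above complete the addon flow: each manifest is first staged on the node (the scp ... --> /etc/kubernetes/addons/... lines), then applied with the node-local kubectl binary against the node's own kubeconfig, so the apply does not depend on the host's kubeconfig. A rough standalone reproduction of the apply step, reusing the SSH endpoint from the sshutil.go lines above (illustrative only, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical re-run of the apply step; port, key path, and user all
        // come from the sshutil.go lines earlier in this log.
        apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f " +
            "/etc/kubernetes/addons/storageclass.yaml"
        cmd := exec.Command("ssh", "-p", "50226",
            "-i", "/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa",
            "docker@localhost", apply)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }
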
	I0818 12:29:34.824440    3721 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:29:34.824452    3721 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:29:39.462015    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:39.462069    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:44.462458    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:44.462481    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:49.462831    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:49.462855    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:54.463227    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:54.463258    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:59.463504    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:59.463517    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:04.464086    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:04.464122    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0818 12:30:04.826808    3721 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0818 12:30:04.831690    3721 out.go:177] * Enabled addons: storage-provisioner
	I0818 12:30:04.837615    3721 addons.go:510] duration metric: took 30.494708333s for enable addons: enabled=[storage-provisioner]
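
From here on the log is a single retry loop in api_server.go: poll https://10.0.2.15:8443/healthz, give each attempt about five seconds (the Client.Timeout errors above), and interleave full log-gathering passes between failures until the overall deadline runs out. A minimal sketch of that polling shape, assuming a skip-verify TLS client in place of the client certificates shown in the rest.Config dump earlier:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // ~5s per attempt matches the gap between each "Checking apiserver
        // healthz" line and the following "stopped: ... Client.Timeout" line.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second) // minikube waits longer between passes
        }
        fmt.Println("gave up waiting for apiserver")
    }
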
	I0818 12:30:09.464986    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:09.465028    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:14.466127    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:14.466179    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:19.467769    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:19.467818    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:24.469590    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:24.469635    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:29.471787    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:29.471831    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:34.474095    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:34.474191    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:34.487266    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:30:34.487343    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:34.497867    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:30:34.497937    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:34.508512    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:30:34.508575    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:34.519071    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:30:34.519136    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:34.529589    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:30:34.529658    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:34.545583    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:30:34.545650    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:34.555595    3721 logs.go:276] 0 containers: []
	W0818 12:30:34.555610    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:34.555675    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:34.566785    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:30:34.566800    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:34.566807    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:30:34.599438    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:34.599539    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:34.600425    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:34.600431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:34.604917    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:34.604924    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:34.640155    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:30:34.640165    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:30:34.655969    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:34.655979    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:34.680698    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:30:34.680709    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:30:34.692084    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:30:34.692094    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:34.703326    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:30:34.703341    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:30:34.722685    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:30:34.722697    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:30:34.736575    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:30:34.736586    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:30:34.748246    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:30:34.748255    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:30:34.759710    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:30:34.759721    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:30:34.774334    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:30:34.774345    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:30:34.792488    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:34.792497    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:30:34.792525    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:30:34.792530    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:34.792533    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:34.792538    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:34.792541    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
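
Every gathering pass has the same two phases: resolve container IDs for each control-plane component with docker ps name filters (the logs.go:276 counts above), then tail the last 400 lines of each container plus kubelet, dmesg, the Docker journal, and kubectl describe nodes. A small sketch of the discovery phase, assuming only the docker CLI on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listIDs mirrors the "docker ps -a --filter=name=k8s_<component>
    // --format={{.ID}}" Run: lines above.
    func listIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            ids := listIDs(c)
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            // phase two then runs: docker logs --tail 400 <id> for each ID
        }
    }
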
	I0818 12:30:44.796637    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:49.799019    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:49.799412    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:49.836097    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:30:49.836222    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:49.854297    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:30:49.854382    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:49.868826    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:30:49.868892    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:49.880684    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:30:49.880748    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:49.893518    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:30:49.893584    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:49.904519    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:30:49.904576    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:49.914534    3721 logs.go:276] 0 containers: []
	W0818 12:30:49.914551    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:49.914600    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:49.925108    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:30:49.925122    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:30:49.925128    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:30:49.940316    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:30:49.940330    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:30:49.952492    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:30:49.952502    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:30:49.975269    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:30:49.975283    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:30:49.986997    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:30:49.987006    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:30:50.008269    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:50.008278    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:50.012728    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:50.012736    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:50.050936    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:30:50.050946    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:30:50.064749    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:30:50.064758    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:30:50.076704    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:30:50.076713    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:30:50.088780    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:50.088791    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:50.111614    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:30:50.111621    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:50.123061    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:50.123072    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:30:50.156115    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:50.156223    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:50.157140    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:50.157145    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:30:50.157175    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:30:50.157182    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:50.157186    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:50.157231    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:50.157235    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
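
The only problem the kubelet scan keeps surfacing is an authorization error, not a crash. Under Kubernetes' Node authorizer, a kubelet identified as system:node:<name> may only read a ConfigMap that is referenced by a pod bound to its node, and at 19:29:47 the authorizer had no such edge for the coredns ConfigMap ("no relationship found between node ... and this object"); that normally clears once the coredns pods are re-bound to the node, so the real blocker in this test remains the unanswered /healthz. A minimal sketch of the kind of pattern scan behind the "Found kubelet problem" lines (the patterns here are hypothetical; minikube's logs.go keeps its own list):

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Hypothetical problem patterns, standing in for logs.go's own list.
        problem := regexp.MustCompile(`forbidden|Failed to watch|failed to list`)
        journal := `Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710 reflector.go:324] configmaps "coredns" is forbidden`
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if problem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }
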
	I0818 12:31:00.161268    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:05.163555    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:05.163713    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:05.177592    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:05.177667    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:05.188671    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:05.188745    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:05.199220    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:05.199292    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:05.217082    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:05.217149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:05.229530    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:05.229601    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:05.240270    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:05.240338    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:05.250414    3721 logs.go:276] 0 containers: []
	W0818 12:31:05.250428    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:05.250477    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:05.260659    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:05.260674    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:05.260679    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:05.277978    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:05.277990    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:05.290147    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:05.290158    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:05.324713    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:05.324808    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:05.325742    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:05.325750    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:05.330631    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:05.330640    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:05.342279    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:05.342291    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:05.355334    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:05.355346    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:05.370260    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:05.370271    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:05.382025    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:05.382035    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:05.417141    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:05.417151    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:05.431358    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:05.431368    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:05.445246    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:05.445259    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:05.456747    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:05.456760    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:05.481696    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:05.481705    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:05.481732    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:05.481737    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:05.481740    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:05.481744    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:05.481746    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:15.485655    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:20.487366    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:20.487541    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:20.503417    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:20.503502    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:20.516044    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:20.516121    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:20.527340    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:20.527413    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:20.537617    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:20.537686    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:20.548016    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:20.548083    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:20.558192    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:20.558263    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:20.567971    3721 logs.go:276] 0 containers: []
	W0818 12:31:20.567986    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:20.568039    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:20.578246    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:20.578262    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:20.578267    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:20.595712    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:20.595723    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:20.629372    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:20.629463    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:20.630340    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:20.630345    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:20.635332    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:20.635340    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:20.670339    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:20.670349    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:20.685065    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:20.685075    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:20.698766    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:20.698777    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:20.710191    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:20.710201    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:20.729489    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:20.729499    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:20.740801    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:20.740811    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:20.763522    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:20.763530    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:20.778500    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:20.778510    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:20.790119    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:20.790130    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:20.801942    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:20.801951    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:20.801979    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:20.801984    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:20.801987    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:20.801992    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:20.801994    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:30.806236    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:35.808899    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:35.809078    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:35.830567    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:35.830653    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:35.841734    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:35.841805    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:35.852543    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:35.852616    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:35.862694    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:35.862762    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:35.872884    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:35.872956    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:35.883149    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:35.883221    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:35.893257    3721 logs.go:276] 0 containers: []
	W0818 12:31:35.893269    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:35.893331    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:35.903363    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:35.903378    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:35.903383    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:35.907861    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:35.907868    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:35.919709    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:35.919719    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:35.935744    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:35.935754    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:35.947770    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:35.947783    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:35.965608    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:35.965617    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:35.997851    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:35.997944    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:35.998845    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:35.998851    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:36.034377    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:36.034388    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:36.048835    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:36.048847    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:36.062555    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:36.062568    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:36.077238    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:36.077252    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:36.088619    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:36.088630    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:36.111393    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:36.111401    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:36.122799    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:36.122810    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:36.122838    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:36.122843    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:36.122847    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:36.122851    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:36.122853    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:46.126888    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:51.129148    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:51.129432    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:51.162024    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:51.162152    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:51.180349    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:51.180447    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:51.194991    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:31:51.195067    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:51.212116    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:51.212182    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:51.223094    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:51.223159    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:51.233995    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:51.234069    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:51.244221    3721 logs.go:276] 0 containers: []
	W0818 12:31:51.244233    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:51.244288    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:51.254832    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:51.254851    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:31:51.254857    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:31:51.266874    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:51.266885    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:51.281875    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:51.281887    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:51.299553    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:51.299563    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:51.331981    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:51.332073    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:51.333002    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:51.333010    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:51.348601    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:51.348610    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:51.372422    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:51.372431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:51.383937    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:51.383948    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:51.388502    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:31:51.388510    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:31:51.400543    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:51.400556    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:51.412053    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:51.412067    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:51.423454    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:51.423464    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:51.437280    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:51.437291    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:51.451295    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:51.451305    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:51.463204    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:51.463218    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:51.497229    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:51.497242    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:51.497273    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:51.497279    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:51.497283    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:51.497286    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:51.497290    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
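
One change does show up in this pass: the coredns filter now returns four containers instead of two, so 250ecb5c1a5a and a6858da4bd1c were started sometime after the 12:31:20 pass even though the apiserver still refuses /healthz, while every other component keeps the same container ID. A quick sketch of spotting that churn from the IDs in the log:

    package main

    import "fmt"

    func main() {
        // coredns IDs from the 12:31:20 pass vs. the 12:31:51 pass above.
        before := map[string]bool{"a143f9cf22f7": true, "d277ab82a17b": true}
        after := []string{"250ecb5c1a5a", "a6858da4bd1c", "a143f9cf22f7", "d277ab82a17b"}
        for _, id := range after {
            if !before[id] {
                fmt.Println("new coredns container:", id)
            }
        }
    }
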
	I0818 12:32:01.501326    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:06.503534    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:06.503624    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:06.514375    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:06.514446    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:06.525200    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:06.525274    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:06.535887    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:06.535958    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:06.546491    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:06.546555    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:06.558100    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:06.558196    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:06.569044    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:06.569104    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:06.579113    3721 logs.go:276] 0 containers: []
	W0818 12:32:06.579125    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:06.579190    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:06.590236    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:06.590257    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:06.590262    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:06.602615    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:06.602625    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:06.614570    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:06.614580    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:06.619167    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:06.619173    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:06.630977    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:06.630987    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:06.651690    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:06.651699    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:06.686569    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:06.686582    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:06.701066    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:06.701076    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:06.712361    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:06.712372    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:06.727908    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:06.727920    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:06.762059    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:06.762152    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:06.763024    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:06.763030    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:06.777202    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:06.777212    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:06.800763    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:06.800776    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:06.812198    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:06.812213    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:06.825935    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:06.825947    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:06.837710    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:06.837720    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:06.837751    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:06.837756    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:06.837760    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:06.837768    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:06.837770    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:32:16.841794    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:21.843093    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:21.843266    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:21.860255    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:21.860337    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:21.875324    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:21.875404    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:21.887320    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:21.887390    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:21.899334    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:21.899404    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:21.910748    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:21.910807    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:21.922853    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:21.922889    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:21.933892    3721 logs.go:276] 0 containers: []
	W0818 12:32:21.933904    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:21.933968    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:21.945545    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:21.945569    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:21.945576    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:21.958717    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:21.958731    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:21.971595    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:21.971604    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:21.993105    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:21.993115    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:22.020306    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:22.020332    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:22.035735    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:22.035750    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:22.048002    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:22.048013    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:22.065095    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:22.065107    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:22.099212    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:22.099307    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:22.100209    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:22.100216    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:22.104574    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:22.104584    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:22.117539    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:22.117552    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:22.130933    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:22.130946    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:22.149252    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:22.149262    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:22.161544    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:22.161555    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:22.207784    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:22.207797    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:22.223833    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:22.223845    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:22.223872    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:22.223876    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:22.223902    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:22.223908    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:22.223911    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:32:32.227930    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:37.230141    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:37.230311    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:37.251767    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:37.251851    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:37.263449    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:37.263521    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:37.274388    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:37.274469    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:37.284605    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:37.284672    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:37.295020    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:37.295087    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:37.305451    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:37.305518    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:37.315791    3721 logs.go:276] 0 containers: []
	W0818 12:32:37.315803    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:37.315857    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:37.326141    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:37.326166    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:37.326171    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:37.338030    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:37.338041    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:37.349504    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:37.349515    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:37.375237    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:37.375246    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:37.411555    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:37.411565    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:37.429307    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:37.429318    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:37.463579    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:37.463678    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:37.464580    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:37.464587    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:37.477493    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:37.477503    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:37.492206    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:37.492217    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:37.504483    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:37.504493    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:37.509017    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:37.509027    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:37.523027    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:37.523037    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:37.537104    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:37.537113    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:37.549610    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:37.549620    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:37.561205    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:37.561215    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:37.572680    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:37.572690    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:37.572718    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:37.572722    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:37.572725    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:37.572728    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:37.572731    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:32:47.576793    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:52.578054    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:52.578187    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:52.589738    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:52.589812    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:52.600072    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:52.600149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:52.610574    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:52.610648    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:52.621176    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:52.621243    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:52.631495    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:52.631568    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:52.642242    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:52.642307    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:52.655552    3721 logs.go:276] 0 containers: []
	W0818 12:32:52.655562    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:52.655617    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:52.665557    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:52.665573    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:52.665578    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:52.677467    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:52.677478    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:52.692224    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:52.692236    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:52.703885    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:52.703898    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:52.715075    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:52.715084    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:52.730611    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:52.730623    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:52.745697    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:52.745707    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:52.757375    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:52.757385    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:52.762219    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:52.762224    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:52.773592    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:52.773604    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:52.785411    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:52.785425    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:52.808914    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:52.808923    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:52.841726    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:52.841819    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:52.842762    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:52.842770    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:52.857236    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:52.857249    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:52.892038    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:52.892048    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:52.912279    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:52.912288    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:52.912319    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:52.912323    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:52.912327    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:52.912331    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:52.912333    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:02.916364    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:07.918667    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:07.918889    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:33:07.944253    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:33:07.944383    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:33:07.962564    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:33:07.962650    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:33:07.975855    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:33:07.975934    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:33:07.987400    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:33:07.987468    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:33:07.998157    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:33:07.998221    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:33:08.009310    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:33:08.009381    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:33:08.019783    3721 logs.go:276] 0 containers: []
	W0818 12:33:08.019795    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:33:08.019853    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:33:08.030708    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:33:08.030725    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:33:08.030730    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:33:08.065742    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:33:08.065754    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:33:08.078037    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:33:08.078050    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:33:08.091231    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:33:08.091242    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:33:08.113650    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:33:08.113660    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:33:08.127766    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:33:08.127775    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:33:08.141755    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:33:08.141769    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:33:08.166412    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:33:08.166423    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:33:08.178073    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:33:08.178084    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:33:08.211779    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:08.211873    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:08.212802    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:33:08.212808    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:33:08.225117    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:33:08.225128    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:33:08.243807    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:33:08.243819    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:33:08.248707    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:33:08.248717    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:33:08.264271    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:33:08.264282    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:33:08.276098    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:33:08.276110    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:33:08.291918    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:08.291932    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:33:08.291959    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:33:08.291964    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:08.291967    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:08.291971    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:08.291974    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:18.296035    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:23.298370    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:23.298530    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:33:23.316168    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:33:23.316267    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:33:23.329914    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:33:23.330019    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:33:23.341410    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:33:23.341481    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:33:23.354717    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:33:23.354787    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:33:23.365326    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:33:23.365394    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:33:23.376174    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:33:23.376241    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:33:23.393392    3721 logs.go:276] 0 containers: []
	W0818 12:33:23.393403    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:33:23.393467    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:33:23.403502    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:33:23.403520    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:33:23.403526    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:33:23.414949    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:33:23.414962    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:33:23.430167    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:33:23.430177    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:33:23.441953    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:33:23.441963    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:33:23.447259    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:33:23.447273    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:33:23.500604    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:33:23.500630    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:33:23.515159    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:33:23.515172    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:33:23.527274    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:33:23.527285    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:33:23.539991    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:33:23.540003    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:33:23.551544    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:33:23.551562    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:33:23.574473    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:33:23.574481    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:33:23.586252    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:33:23.586261    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:33:23.597728    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:33:23.597738    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:33:23.632616    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:23.632709    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:23.633638    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:33:23.633648    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:33:23.649596    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:33:23.649614    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:33:23.676222    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:23.676231    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:33:23.676260    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:33:23.676266    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:23.676269    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:23.676272    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:23.676274    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:33.669869    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:38.668798    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:38.672505    3721 out.go:201] 
	W0818 12:33:38.676521    3721 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0818 12:33:38.676528    3721 out.go:270] * 
	W0818 12:33:38.676947    3721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:33:38.691467    3721 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-18 12:33:38.764652 -0700 PDT m=+3382.543621542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-363000 -n running-upgrade-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-363000 -n running-upgrade-363000: exit status 2 (15.682549958s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-363000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-574000          | force-systemd-flag-574000 | jenkins | v1.33.1 | 18 Aug 24 12:23 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-172000              | force-systemd-env-172000  | jenkins | v1.33.1 | 18 Aug 24 12:23 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-172000           | force-systemd-env-172000  | jenkins | v1.33.1 | 18 Aug 24 12:23 PDT | 18 Aug 24 12:23 PDT |
	| start   | -p docker-flags-876000                | docker-flags-876000       | jenkins | v1.33.1 | 18 Aug 24 12:23 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-574000             | force-systemd-flag-574000 | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-574000          | force-systemd-flag-574000 | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT | 18 Aug 24 12:24 PDT |
	| start   | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-876000 ssh               | docker-flags-876000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-876000 ssh               | docker-flags-876000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-876000                | docker-flags-876000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT | 18 Aug 24 12:24 PDT |
	| start   | -p cert-options-287000                | cert-options-287000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-287000 ssh               | cert-options-287000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-287000 -- sudo        | cert-options-287000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-287000                | cert-options-287000       | jenkins | v1.33.1 | 18 Aug 24 12:24 PDT | 18 Aug 24 12:24 PDT |
	| start   | -p running-upgrade-363000             | minikube                  | jenkins | v1.26.0 | 18 Aug 24 12:24 PDT | 18 Aug 24 12:25 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-363000             | running-upgrade-363000    | jenkins | v1.33.1 | 18 Aug 24 12:25 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-172000             | cert-expiration-172000    | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT | 18 Aug 24 12:27 PDT |
	| start   | -p kubernetes-upgrade-288000          | kubernetes-upgrade-288000 | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-288000          | kubernetes-upgrade-288000 | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT | 18 Aug 24 12:27 PDT |
	| start   | -p kubernetes-upgrade-288000          | kubernetes-upgrade-288000 | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-288000          | kubernetes-upgrade-288000 | jenkins | v1.33.1 | 18 Aug 24 12:27 PDT | 18 Aug 24 12:27 PDT |
	| start   | -p stopped-upgrade-521000             | minikube                  | jenkins | v1.26.0 | 18 Aug 24 12:27 PDT | 18 Aug 24 12:28 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-521000 stop           | minikube                  | jenkins | v1.26.0 | 18 Aug 24 12:28 PDT | 18 Aug 24 12:28 PDT |
	| start   | -p stopped-upgrade-521000             | stopped-upgrade-521000    | jenkins | v1.33.1 | 18 Aug 24 12:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
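The final rows of the table above are the TestStoppedBinaryUpgrade/Upgrade sequence: the profile is created with the previous release (v1.26.0, which still spells the driver flag --vm-driver), stopped with that same release, and then restarted with the binary under test (v1.33.1). A minimal hand-run sketch of the same flow, assuming the two binaries are available as minikube-v1.26.0 and minikube-v1.33.1 (illustrative names; flags taken from the table):

  # create and stop the profile with the old release
  minikube-v1.26.0 start -p stopped-upgrade-521000 --memory=2200 --vm-driver=qemu2
  minikube-v1.26.0 stop -p stopped-upgrade-521000

  # restart the stopped profile with the release under test
  minikube-v1.33.1 start -p stopped-upgrade-521000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2

The "Last Start" log that follows is the output of that final start.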
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:28:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:28:28.540516    3866 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:28:28.540664    3866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:28:28.540668    3866 out.go:358] Setting ErrFile to fd 2...
	I0818 12:28:28.540671    3866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:28:28.540836    3866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:28:28.541967    3866 out.go:352] Setting JSON to false
	I0818 12:28:28.561404    3866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3478,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:28:28.561488    3866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:28:28.565504    3866 out.go:177] * [stopped-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:28:28.573447    3866 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:28:28.573482    3866 notify.go:220] Checking for updates...
	I0818 12:28:28.580440    3866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:28:28.584458    3866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:28:28.587474    3866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:28:28.590356    3866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:28:28.593402    3866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:28:28.596735    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:28:28.598339    3866 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 12:28:28.601440    3866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:28:28.605456    3866 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:28:28.610436    3866 start.go:297] selected driver: qemu2
	I0818 12:28:28.610442    3866 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:28.610496    3866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:28:28.613088    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:28:28.613107    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:28:28.613135    3866 start.go:340] cluster config:
	{Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:28.613183    3866 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:28:28.620405    3866 out.go:177] * Starting "stopped-upgrade-521000" primary control-plane node in "stopped-upgrade-521000" cluster
	I0818 12:28:28.624418    3866 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:28:28.624433    3866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0818 12:28:28.624440    3866 cache.go:56] Caching tarball of preloaded images
	I0818 12:28:28.624492    3866 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:28:28.624498    3866 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0818 12:28:28.624546    3866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/config.json ...
	I0818 12:28:28.624987    3866 start.go:360] acquireMachinesLock for stopped-upgrade-521000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:28:28.625015    3866 start.go:364] duration metric: took 22.666µs to acquireMachinesLock for "stopped-upgrade-521000"
	I0818 12:28:28.625025    3866 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:28:28.625029    3866 fix.go:54] fixHost starting: 
	I0818 12:28:28.625141    3866 fix.go:112] recreateIfNeeded on stopped-upgrade-521000: state=Stopped err=<nil>
	W0818 12:28:28.625151    3866 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:28:28.633400    3866 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-521000" ...
	I0818 12:28:26.285486    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:28.637447    3866 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:28:28.637508    3866 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50437-:22,hostfwd=tcp::50438-:2376,hostname=stopped-upgrade-521000 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/disk.qcow2
	I0818 12:28:28.682640    3866 main.go:141] libmachine: STDOUT: 
	I0818 12:28:28.682666    3866 main.go:141] libmachine: STDERR: 
	I0818 12:28:28.682671    3866 main.go:141] libmachine: Waiting for VM to start (ssh -p 50437 docker@127.0.0.1)...
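The qemu-system-aarch64 command line above shows how the qemu2 driver reaches the guest: user-mode networking only, with two host port forwards (hostfwd tcp::50437-:22 for SSH, tcp::50438-:2376 for the Docker TLS endpoint), which is why every subsequent SSH client in this log dials localhost:50437 rather than the guest's 10.0.2.15 address. A hedged host-side illustration of the same forward (port taken from the command line; key path shortened from the sshutil lines below):

  # shell into the guest over the forwarded SSH port
  ssh -p 50437 -i ~/.minikube/machines/stopped-upgrade-521000/id_rsa docker@127.0.0.1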
	I0818 12:28:31.287937    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:31.288388    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:31.332372    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:31.332499    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:31.356511    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:31.356594    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:31.371517    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:31.371588    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:31.383282    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:31.383355    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:31.393780    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:31.393849    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:31.404707    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:31.404777    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:31.415144    3721 logs.go:276] 0 containers: []
	W0818 12:28:31.415155    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:31.415210    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:31.425753    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:31.425770    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:31.425775    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:31.430532    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:31.430539    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:31.454685    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:31.454695    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:31.466373    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:31.466389    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:31.483716    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:31.483726    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:31.496175    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:31.496186    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:31.507814    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:31.507828    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:31.544408    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:31.544419    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:31.557256    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:31.557268    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:31.572310    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:31.572322    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:31.615003    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:31.615015    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:31.629620    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:31.629632    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:31.641841    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:31.641851    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:31.656022    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:31.656032    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:31.680247    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:31.680255    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:31.691660    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:31.691671    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:31.703998    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:31.704011    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
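This block, repeated nearly verbatim at 12:28:39 and 12:28:47 below, is minikube's apiserver-recovery diagnostic sweep: each healthz probe that times out triggers a pass that lists the containers behind every control-plane component and tails the last 400 lines of each. Reduced to a sketch (component names and tail length taken from the log; a simplification for reading the pattern, not minikube's actual implementation):

  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet storage-provisioner; do
    for id in $(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}'); do
      echo "==> $c [$id] <=="
      docker logs --tail 400 "$id"
    done
  done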
	I0818 12:28:34.218222    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:39.220343    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:39.220465    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:39.235296    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:39.235370    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:39.247724    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:39.247802    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:39.260290    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:39.260370    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:39.273182    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:39.273257    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:39.286102    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:39.286171    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:39.298028    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:39.298090    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:39.311531    3721 logs.go:276] 0 containers: []
	W0818 12:28:39.311543    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:39.311598    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:39.322860    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:39.322880    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:39.322886    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:39.337240    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:39.337250    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:39.349552    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:39.349564    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:39.388993    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:39.389005    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:39.393852    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:39.393859    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:39.405587    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:39.405601    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:39.423918    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:39.423929    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:39.439615    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:39.439627    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:39.451895    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:39.451905    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:39.463574    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:39.463585    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:39.474655    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:39.474667    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:39.487248    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:39.487261    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:39.498906    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:39.498916    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:39.538498    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:39.538511    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:39.560067    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:39.560083    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:39.574380    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:39.574393    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:39.595317    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:39.595339    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:42.124463    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:47.126779    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:47.127227    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:47.167618    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:47.167751    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:47.190648    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:47.190749    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:47.205997    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:47.206084    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:47.219224    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:47.219293    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:47.234723    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:47.234779    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:47.246970    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:47.247044    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:47.257274    3721 logs.go:276] 0 containers: []
	W0818 12:28:47.257287    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:47.257344    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:47.267826    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:47.267844    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:47.267850    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:47.301798    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:47.301808    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:47.314307    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:47.314322    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:47.326830    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:47.326842    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:47.338714    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:47.338727    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:47.351267    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:47.351280    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:47.364086    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:47.364099    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:47.401621    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:47.401629    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:47.415493    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:47.415504    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:47.427087    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:47.427099    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:28:47.449551    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:47.449560    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:47.454022    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:47.454031    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:47.468470    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:47.468480    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:47.485706    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:47.485716    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:47.508292    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:47.508303    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:47.525070    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:47.525085    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:47.554359    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:47.554371    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:48.853911    3866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/config.json ...
	I0818 12:28:48.854156    3866 machine.go:93] provisionDockerMachine start ...
	I0818 12:28:48.854213    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.854406    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.854413    3866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:28:48.917427    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:28:48.917442    3866 buildroot.go:166] provisioning hostname "stopped-upgrade-521000"
	I0818 12:28:48.917495    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.917609    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.917616    3866 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-521000 && echo "stopped-upgrade-521000" | sudo tee /etc/hostname
	I0818 12:28:48.979277    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-521000
	
	I0818 12:28:48.979327    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.979441    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.979449    3866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-521000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-521000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-521000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:28:49.040916    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:28:49.040927    3866 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-984/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-984/.minikube}
	I0818 12:28:49.040934    3866 buildroot.go:174] setting up certificates
	I0818 12:28:49.040938    3866 provision.go:84] configureAuth start
	I0818 12:28:49.040944    3866 provision.go:143] copyHostCerts
	I0818 12:28:49.041015    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem, removing ...
	I0818 12:28:49.041021    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem
	I0818 12:28:49.041132    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem (1679 bytes)
	I0818 12:28:49.041320    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem, removing ...
	I0818 12:28:49.041323    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem
	I0818 12:28:49.041372    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem (1078 bytes)
	I0818 12:28:49.041474    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem, removing ...
	I0818 12:28:49.041477    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem
	I0818 12:28:49.041518    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem (1123 bytes)
	I0818 12:28:49.041609    3866 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-521000 san=[127.0.0.1 localhost minikube stopped-upgrade-521000]
	I0818 12:28:49.115774    3866 provision.go:177] copyRemoteCerts
	I0818 12:28:49.115820    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:28:49.115831    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.147737    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0818 12:28:49.154298    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0818 12:28:49.161151    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:28:49.168439    3866 provision.go:87] duration metric: took 127.493042ms to configureAuth
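configureAuth above refreshes the client CA material under .minikube, mints a server certificate whose SANs cover every name the forwarded endpoint answers to (127.0.0.1, localhost, minikube, stopped-upgrade-521000), and copies ca.pem, server.pem, and server-key.pem into /etc/docker on the guest; these are the exact files dockerd is pointed at with --tlsverify later in this log. One way to confirm the SANs on the provisioned server cert (openssl invocation is illustrative; path shortened from the scp lines above):

  openssl x509 -noout -text -in ~/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'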
	I0818 12:28:49.168448    3866 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:28:49.168560    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:28:49.168591    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.168685    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.168690    3866 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:28:49.226692    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:28:49.226701    3866 buildroot.go:70] root file system type: tmpfs
	I0818 12:28:49.226753    3866 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:28:49.226799    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.226925    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.226962    3866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:28:49.290536    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:28:49.290587    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.290698    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.290707    3866 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:28:49.641164    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:28:49.641178    3866 machine.go:96] duration metric: took 787.020792ms to provisionDockerMachine
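The unit-file update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the installed unit, and only on a difference (or, as here, when no unit exists yet and diff itself fails) is the new file moved into place, the daemon reloaded, and docker enabled and restarted. That keeps repeated provisioning runs from restarting a dockerd whose unit has not changed. The pattern in isolation, where render_unit is a hypothetical stand-in for the printf that produced the unit body above:

  render_unit > /tmp/docker.service.new   # hypothetical: emit the desired unit text
  sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
    sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl daemon-reload
    sudo systemctl -f enable docker
    sudo systemctl -f restart docker
  }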
	I0818 12:28:49.641185    3866 start.go:293] postStartSetup for "stopped-upgrade-521000" (driver="qemu2")
	I0818 12:28:49.641193    3866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:28:49.641257    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:28:49.641265    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.675135    3866 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:28:49.676489    3866 info.go:137] Remote host: Buildroot 2021.02.12
	I0818 12:28:49.676497    3866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/addons for local assets ...
	I0818 12:28:49.676574    3866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/files for local assets ...
	I0818 12:28:49.676672    3866 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem -> 14592.pem in /etc/ssl/certs
	I0818 12:28:49.676766    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:28:49.679750    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:28:49.686886    3866 start.go:296] duration metric: took 45.693542ms for postStartSetup
	I0818 12:28:49.686904    3866 fix.go:56] duration metric: took 21.062060625s for fixHost
	I0818 12:28:49.686949    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.687069    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.687075    3866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:28:49.747920    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724009330.236153754
	
	I0818 12:28:49.747931    3866 fix.go:216] guest clock: 1724009330.236153754
	I0818 12:28:49.747936    3866 fix.go:229] Guest: 2024-08-18 12:28:50.236153754 -0700 PDT Remote: 2024-08-18 12:28:49.686906 -0700 PDT m=+21.175173084 (delta=549.247754ms)
	I0818 12:28:49.747953    3866 fix.go:200] guest clock delta is within tolerance: 549.247754ms
	I0818 12:28:49.747956    3866 start.go:83] releasing machines lock for "stopped-upgrade-521000", held for 21.123122209s
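Before releasing the machines lock, fix.go sanity-checks the guest clock: it runs date +%s.%N inside the VM, compares the result against the host clock, and accepts the restart because the delta (549ms) is within tolerance. A rough host-side equivalent (port from this log; minikube's actual threshold is internal and not shown here, and note that macOS date lacks %N, hence python3 for the host timestamp):

  guest=$(ssh -p 50437 docker@127.0.0.1 date +%s.%N)
  host=$(python3 -c 'import time; print(time.time())')  # macOS date has no %N
  echo "guest clock delta: $(echo "$host - $guest" | bc) s"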
	I0818 12:28:49.748027    3866 ssh_runner.go:195] Run: cat /version.json
	I0818 12:28:49.748037    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.748027    3866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:28:49.748066    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	W0818 12:28:49.748698    3866 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50437: connect: connection refused
	I0818 12:28:49.748723    3866 retry.go:31] will retry after 368.413037ms: dial tcp [::1]:50437: connect: connection refused
	W0818 12:28:50.169242    3866 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0818 12:28:50.169417    3866 ssh_runner.go:195] Run: systemctl --version
	I0818 12:28:50.173892    3866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:28:50.177873    3866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:28:50.177952    3866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0818 12:28:50.183391    3866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0818 12:28:50.192006    3866 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:28:50.192025    3866 start.go:495] detecting cgroup driver to use...
	I0818 12:28:50.192168    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:28:50.202557    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0818 12:28:50.206444    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:28:50.209896    3866 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:28:50.209931    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:28:50.213657    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:28:50.217345    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:28:50.220931    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:28:50.224603    3866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:28:50.227809    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:28:50.230483    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:28:50.233399    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:28:50.236923    3866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:28:50.239810    3866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:28:50.242353    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:50.326816    3866 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:28:50.337728    3866 start.go:495] detecting cgroup driver to use...
	I0818 12:28:50.337795    3866 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:28:50.343391    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:28:50.348317    3866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:28:50.354366    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:28:50.359164    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:28:50.363681    3866 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:28:50.408908    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:28:50.414001    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:28:50.419287    3866 ssh_runner.go:195] Run: which cri-dockerd
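With docker chosen as the runtime, the crictl.yaml written just above repoints crictl from containerd's socket (set earlier at 12:28:50.192168) to cri-dockerd's, so the later sudo /usr/bin/crictl version call resolves against dockerd through the cri-dockerd shim. The same endpoint can also be passed explicitly (standard crictl flag; shown only as illustration):

  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a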
	I0818 12:28:50.420557    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:28:50.423373    3866 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0818 12:28:50.428216    3866 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:28:50.503395    3866 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:28:50.566690    3866 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:28:50.566760    3866 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:28:50.571753    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:50.647318    3866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:28:51.786175    3866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.138848625s)
	I0818 12:28:51.786232    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:28:51.794794    3866 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:28:51.801314    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:28:51.806301    3866 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:28:51.869675    3866 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:28:51.949084    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:52.030125    3866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:28:52.036441    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:28:52.041065    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:52.123926    3866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:28:52.163042    3866 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:28:52.163131    3866 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:28:52.165387    3866 start.go:563] Will wait 60s for crictl version
	I0818 12:28:52.165438    3866 ssh_runner.go:195] Run: which crictl
	I0818 12:28:52.166943    3866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:28:52.180924    3866 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0818 12:28:52.180994    3866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:28:52.196482    3866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:28:52.217048    3866 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0818 12:28:52.217188    3866 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0818 12:28:52.218568    3866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:28:52.222100    3866 kubeadm.go:883] updating cluster {Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0818 12:28:52.222146    3866 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:28:52.222186    3866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:28:52.233943    3866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:28:52.233952    3866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0818 12:28:52.234002    3866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:28:52.237373    3866 ssh_runner.go:195] Run: which lz4
	I0818 12:28:52.238597    3866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 12:28:52.239829    3866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 12:28:52.239841    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0818 12:28:53.117329    3866 docker.go:649] duration metric: took 878.7675ms to copy over tarball
	I0818 12:28:53.117391    3866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
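[Editor's note] The preceding four steps are the preload flow: stat the guest for /preloaded.tar.lz4, copy the cached ~360 MB tarball over when the stat fails, then unpack it into /var so the container runtime sees the images. A sketch of that sequencing, under stated assumptions: `ensurePreload` and `copyToGuest` are hypothetical names, and the commands run locally via os/exec here, whereas minikube runs them on the guest through its ssh_runner.

    package preload

    import "os/exec"

    // ensurePreload copies and extracts the image tarball only if it is not
    // already present. `stat` exiting non-zero is the same existence probe
    // as `stat -c "%s %y" /preloaded.tar.lz4` in the log.
    func ensurePreload(copyToGuest func(src, dst string) error, cacheTarball string) error {
    	if exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run() == nil {
    		return nil // already on the guest, skip the transfer
    	}
    	if err := copyToGuest(cacheTarball, "/preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	// Unpack into /var with xattrs preserved, decompressing via lz4 --
    	// the same tar invocation the log runs next.
    	return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
    		"security.capability", "-I", "lz4", "-C", "/var", "-xf",
    		"/preloaded.tar.lz4").Run()
    }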
	I0818 12:28:50.066828    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:54.272726    3866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155330958s)
	I0818 12:28:54.272739    3866 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 12:28:54.288450    3866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:28:54.291340    3866 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0818 12:28:54.296086    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:54.377910    3866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:28:55.963779    3866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.585865458s)
	I0818 12:28:55.963869    3866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:28:55.975179    3866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:28:55.975190    3866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0818 12:28:55.975195    3866 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 12:28:55.980210    3866 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:55.982417    3866 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:55.984243    3866 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:55.984502    3866 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:55.985975    3866 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:55.985977    3866 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:55.987351    3866 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0818 12:28:55.987372    3866 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:55.988746    3866 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:55.988807    3866 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:55.989852    3866 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:55.990652    3866 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0818 12:28:55.991087    3866 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:55.991215    3866 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:55.992027    3866 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:55.992811    3866 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.421971    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.424473    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.440083    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0818 12:28:56.440484    3866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0818 12:28:56.440514    3866 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.440549    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.454144    3866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0818 12:28:56.454165    3866 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.454217    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.464661    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0818 12:28:56.464733    3866 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0818 12:28:56.464759    3866 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0818 12:28:56.464812    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0818 12:28:56.467682    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0818 12:28:56.471808    3866 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0818 12:28:56.471920    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.475558    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.477122    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0818 12:28:56.477226    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0818 12:28:56.481903    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.486926    3866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0818 12:28:56.486948    3866 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.486997    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.491367    3866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0818 12:28:56.491387    3866 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.491370    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0818 12:28:56.491415    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0818 12:28:56.491433    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.500150    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.507880    3866 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0818 12:28:56.507906    3866 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.507912    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0818 12:28:56.507958    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.508023    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:28:56.513834    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0818 12:28:56.516799    3866 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0818 12:28:56.516811    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0818 12:28:56.524393    3866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0818 12:28:56.524415    3866 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.524431    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0818 12:28:56.524459    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0818 12:28:56.524468    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.524475    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0818 12:28:56.524551    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:28:56.568939    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0818 12:28:56.568947    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0818 12:28:56.568967    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0818 12:28:56.568978    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0818 12:28:56.609761    3866 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:28:56.609785    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0818 12:28:56.646940    3866 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0818 12:28:56.647059    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.715995    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0818 12:28:56.716030    3866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0818 12:28:56.716050    3866 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.716110    3866 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.764417    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 12:28:56.764546    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:28:56.777332    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0818 12:28:56.777365    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0818 12:28:56.842756    3866 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:28:56.842772    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0818 12:28:57.125925    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 12:28:57.125948    3866 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:28:57.125957    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0818 12:28:57.277286    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0818 12:28:57.277325    3866 cache_images.go:92] duration metric: took 1.302134333s to LoadCachedImages
	W0818 12:28:57.277368    3866 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
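[Editor's note] The LoadCachedImages pass that just finished repeats one per-image pattern: `docker image inspect --format {{.Id}}` to read the ID in the runtime, compare it against the expected hash, and on mismatch remove the stale tag and stream the cached tarball back in via `cat ... | docker load`. A condensed, hypothetical sketch (`syncImage` is not minikube's actual cache_images code):

    package imagecache

    import (
    	"os/exec"
    	"strings"
    )

    // syncImage reloads `image` from `cachedTar` unless the runtime already
    // reports the expected ID, mirroring the inspect/rmi/load loop above.
    func syncImage(image, wantID, cachedTar string) error {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // already present at the right digest
    	}
    	_ = exec.Command("docker", "rmi", image).Run() // best-effort cleanup
    	return exec.Command("/bin/bash", "-c",
    		"sudo cat "+cachedTar+" | docker load").Run()
    }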
	I0818 12:28:57.277377    3866 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0818 12:28:57.277438    3866 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-521000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:28:57.277506    3866 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:28:57.293590    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:28:57.293602    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:28:57.293607    3866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:28:57.293616    3866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-521000 NodeName:stopped-upgrade-521000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:28:57.293684    3866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-521000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 12:28:57.293734    3866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0818 12:28:57.297197    3866 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:28:57.297224    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 12:28:57.300354    3866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0818 12:28:57.305222    3866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:28:57.310169    3866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0818 12:28:57.315743    3866 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0818 12:28:57.316973    3866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:28:57.320900    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:57.400121    3866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:28:57.407205    3866 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000 for IP: 10.0.2.15
	I0818 12:28:57.407212    3866 certs.go:194] generating shared ca certs ...
	I0818 12:28:57.407221    3866 certs.go:226] acquiring lock for ca certs: {Name:mk3b1337311c50e97f8d40ca44614fc311e1e2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.407389    3866 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key
	I0818 12:28:57.407430    3866 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key
	I0818 12:28:57.407435    3866 certs.go:256] generating profile certs ...
	I0818 12:28:57.407507    3866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key
	I0818 12:28:57.407524    3866 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e
	I0818 12:28:57.407547    3866 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0818 12:28:57.539209    3866 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e ...
	I0818 12:28:57.539226    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e: {Name:mk981c85252a31c73892b4889a1884da9e2890a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.539541    3866 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e ...
	I0818 12:28:57.539547    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e: {Name:mk95ec7db0cf2e39e6562e99e65de92f1b4ddd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.539686    3866 certs.go:381] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt
	I0818 12:28:57.540160    3866 certs.go:385] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key
	I0818 12:28:57.540323    3866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.key
	I0818 12:28:57.540475    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem (1338 bytes)
	W0818 12:28:57.540501    3866 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459_empty.pem, impossibly tiny 0 bytes
	I0818 12:28:57.540509    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:28:57.540547    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem (1078 bytes)
	I0818 12:28:57.540567    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:28:57.540584    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem (1679 bytes)
	I0818 12:28:57.540623    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:28:57.540968    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:28:57.548445    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 12:28:57.555055    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:28:57.561807    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 12:28:57.568929    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:28:57.576229    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 12:28:57.582918    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:28:57.589524    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:28:57.597015    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:28:57.604355    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem --> /usr/share/ca-certificates/1459.pem (1338 bytes)
	I0818 12:28:57.612193    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1708 bytes)
	I0818 12:28:57.619116    3866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:28:57.623910    3866 ssh_runner.go:195] Run: openssl version
	I0818 12:28:57.625752    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I0818 12:28:57.629456    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.631055    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:45 /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.631075    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.632904    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:28:57.636053    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:28:57.638979    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.640320    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.640340    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.642239    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:28:57.645557    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1459.pem && ln -fs /usr/share/ca-certificates/1459.pem /etc/ssl/certs/1459.pem"
	I0818 12:28:57.648883    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.650331    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:45 /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.650349    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.652031    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1459.pem /etc/ssl/certs/51391683.0"
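[Editor's note] The three cert-installation rounds above all use OpenSSL's hashed-symlink convention: a trust store under /etc/ssl/certs is indexed by `<subject-hash>.0`, so each CA is hashed with `openssl x509 -hash -noout` and symlinked under that name (e.g. b5213941.0 for minikubeCA.pem). A sketch of one round; `installCA` is a hypothetical helper that shells out to openssl the same way:

    package certs

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA computes the OpenSSL subject hash for the cert at pemPath
    // and symlinks it as /etc/ssl/certs/<hash>.0, matching the
    // `openssl x509 -hash` + `ln -fs` pair in the log.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }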
	I0818 12:28:57.654863    3866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:28:57.656299    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:28:57.658133    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:28:57.659997    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:28:57.661774    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:28:57.663765    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:28:57.665535    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
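[Editor's note] The run of `-checkend 86400` probes above asks one question per certificate: does it expire within the next 24 hours (86400 seconds)? The equivalent check using only Go's standard library, as a sketch (`expiresSoon` is a hypothetical name):

    package certs

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"time"
    )

    // expiresSoon reports whether the PEM-encoded certificate's NotAfter
    // falls within the given window -- `openssl x509 -checkend` semantics.
    func expiresSoon(pemBytes []byte, within time.Duration) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }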
	I0818 12:28:57.667363    3866 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:57.667428    3866 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:28:57.677887    3866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:28:57.681351    3866 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:28:57.681356    3866 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:28:57.681378    3866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:28:57.685252    3866 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:28:57.685561    3866 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-521000" does not appear in /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:28:57.685659    3866 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-984/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-521000" cluster setting kubeconfig missing "stopped-upgrade-521000" context setting]
	I0818 12:28:57.685845    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.686318    3866 kapi.go:59] client config for stopped-upgrade-521000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fbd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
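[Editor's note] The dump above is a client-go rest.Config with almost everything at its zero value. Stripped down to the fields that matter, the equivalent construction is roughly the following sketch (assumes k8s.io/client-go is available; paths abbreviated here, full paths as in the log):

    package kapi

    import "k8s.io/client-go/rest"

    // stoppedUpgradeConfig reproduces only the non-zero fields from the
    // dump: the apiserver endpoint plus the profile's client cert/key and
    // the shared minikube CA for TLS.
    func stoppedUpgradeConfig() *rest.Config {
    	return &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt",
    		},
    	}
    }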
	I0818 12:28:57.686659    3866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:28:57.689479    3866 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-521000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0818 12:28:57.689484    3866 kubeadm.go:1160] stopping kube-system containers ...
	I0818 12:28:57.689518    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:28:57.700535    3866 docker.go:483] Stopping containers: [949b564f2519 6751986ea10a d4daa11446a6 ed27014bf882 48a2672c14a5 d9e0e5771a1b 78e59ac9d2c3 13aeff4a8a09]
	I0818 12:28:57.700602    3866 ssh_runner.go:195] Run: docker stop 949b564f2519 6751986ea10a d4daa11446a6 ed27014bf882 48a2672c14a5 d9e0e5771a1b 78e59ac9d2c3 13aeff4a8a09
	I0818 12:28:57.711095    3866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 12:28:57.716652    3866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:28:57.719947    3866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:28:57.719956    3866 kubeadm.go:157] found existing configuration files:
	
	I0818 12:28:57.719994    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf
	I0818 12:28:57.722901    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:28:57.722944    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:28:57.725514    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf
	I0818 12:28:57.728137    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:28:57.728162    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:28:57.730573    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf
	I0818 12:28:57.733221    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:28:57.733242    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:28:57.736365    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf
	I0818 12:28:57.738850    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:28:57.738872    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 12:28:57.741578    3866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:28:57.744651    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:57.767535    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.202445    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.349904    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.372361    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
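[Editor's note] Rather than a full `kubeadm init`, the restart path above replays five individual init phases against the regenerated config, in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequencing (`runInitPhases` is a hypothetical helper; minikube actually runs these through bash with the versioned binaries on PATH):

    package restart

    import "os/exec"

    // runInitPhases replays the kubeadm init phases in the order the log
    // shows, each against the freshly written kubeadm.yaml.
    func runInitPhases(kubeadmBin, config string) error {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", config)
    		if err := exec.Command(kubeadmBin, args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }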
	I0818 12:28:58.394359    3866 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:28:58.394438    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:55.067694    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:28:55.067924    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:28:55.096786    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:28:55.096910    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:28:55.114632    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:28:55.114747    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:28:55.128478    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:28:55.128557    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:28:55.140090    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:28:55.140163    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:28:55.150736    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:28:55.150801    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:28:55.160884    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:28:55.160954    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:28:55.171063    3721 logs.go:276] 0 containers: []
	W0818 12:28:55.171076    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:28:55.171132    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:28:55.181760    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:28:55.181780    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:28:55.181789    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:28:55.196128    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:28:55.196138    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:28:55.222178    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:28:55.222188    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:28:55.233617    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:28:55.233633    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:28:55.270754    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:28:55.270763    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:28:55.305540    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:28:55.305552    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:28:55.318815    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:28:55.318828    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:28:55.336435    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:28:55.336448    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:28:55.348518    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:28:55.348530    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:28:55.360066    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:28:55.360075    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:28:55.371623    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:28:55.371636    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:28:55.383162    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:28:55.383171    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:28:55.396037    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:28:55.396049    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:28:55.411021    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:28:55.411034    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:28:55.418303    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:28:55.418318    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:28:55.434234    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:28:55.434250    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:28:55.448207    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:28:55.448221    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
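[Editor's note] Each diagnostics round like the one just completed follows a single pattern: resolve the container IDs for a component via a docker name filter, then pull the last 400 log lines from each ID. A condensed sketch; `gatherLogs` is a hypothetical name, not minikube's logs.go:

    package diag

    import (
    	"os/exec"
    	"strings"
    )

    // gatherLogs mirrors the repeated `docker ps -a --filter=name=k8s_<comp>`
    // + `docker logs --tail 400 <id>` pairs above, returning logs keyed by ID.
    func gatherLogs(component string) (map[string]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	logs := make(map[string]string)
    	for _, id := range strings.Fields(string(out)) {
    		l, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return nil, err
    		}
    		logs[id] = string(l)
    	}
    	return logs, nil
    }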
	I0818 12:28:57.973963    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:28:58.896533    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:59.396451    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:59.400826    3866 api_server.go:72] duration metric: took 1.006476042s to wait for apiserver process to appear ...
	I0818 12:28:59.400835    3866 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:28:59.400850    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
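[Editor's note] From here both processes (3721 and 3866) settle into the same wait loop: GET https://10.0.2.15:8443/healthz with a short client-side timeout, treat "context deadline exceeded" as not-ready-yet, gather diagnostics, and retry. A sketch of the poll under that assumption (`waitHealthy` is a hypothetical name):

    package wait

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls the healthz endpoint until it answers 200 or the
    // deadline passes. The short per-request timeout is what produces the
    // repeated "Client.Timeout exceeded" lines while the apiserver is down.
    func waitHealthy(url string, deadline time.Time) error {
    	client := &http.Client{
    		Timeout: 4 * time.Second,
    		// The apiserver cert is self-signed during bring-up; a real
    		// client would pin the cluster CA rather than skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }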
	I0818 12:29:02.976131    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:02.976236    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:02.986779    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:02.986863    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:02.998038    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:02.998133    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:03.009380    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:03.009458    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:03.020354    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:03.020437    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:03.030393    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:03.030455    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:03.045092    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:03.045166    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:03.055666    3721 logs.go:276] 0 containers: []
	W0818 12:29:03.055678    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:03.055744    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:03.066988    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:03.067010    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:03.067016    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:03.103889    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:03.103900    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:03.116040    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:03.116052    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:03.127554    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:03.127566    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:03.138698    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:03.138710    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:03.173128    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:03.173139    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:03.187549    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:03.187560    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:03.204777    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:03.204787    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:03.227131    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:03.227141    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:03.231246    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:03.231252    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:03.245003    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:03.245012    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:03.256273    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:03.256283    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:03.268097    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:03.268108    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:03.281479    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:03.281490    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:03.292554    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:03.292566    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:03.304384    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:03.304396    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:03.321512    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:03.321522    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:04.402960    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:04.403010    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:05.835103    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:09.403478    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:09.403513    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:10.835907    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:10.836015    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:10.849399    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:10.849473    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:10.860639    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:10.860706    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:10.871376    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:10.871444    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:10.882775    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:10.882849    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:10.893470    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:10.893529    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:10.905577    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:10.905651    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:10.916002    3721 logs.go:276] 0 containers: []
	W0818 12:29:10.916016    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:10.916072    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:10.926324    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:10.926345    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:10.926352    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:10.965782    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:10.965797    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:10.980743    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:10.980760    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:10.992656    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:10.992667    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:11.004021    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:11.004033    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:11.015166    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:11.015201    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:11.026361    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:11.026376    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:11.044279    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:11.044289    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:11.059699    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:11.059712    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:11.072110    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:11.072124    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:11.087077    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:11.087091    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:11.098417    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:11.098431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:11.134646    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:11.134656    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:11.146035    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:11.146046    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:11.164019    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:11.164030    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:11.175595    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:11.175608    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:11.198083    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:11.198091    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:13.704557    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:14.403892    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:14.403929    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:18.706771    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:18.706936    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:18.719899    3721 logs.go:276] 2 containers: [f4408189a16b ce6f54feb45f]
	I0818 12:29:18.719977    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:18.730674    3721 logs.go:276] 2 containers: [ec2a49bf8d72 6c149833de79]
	I0818 12:29:18.730741    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:18.742690    3721 logs.go:276] 1 containers: [95128c67f594]
	I0818 12:29:18.742759    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:18.755528    3721 logs.go:276] 2 containers: [4abe37d3f920 0219e3a900cb]
	I0818 12:29:18.755603    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:18.765675    3721 logs.go:276] 1 containers: [77d1e703b04a]
	I0818 12:29:18.765745    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:18.776619    3721 logs.go:276] 2 containers: [5fd8ca21e473 f7e9dad21f3c]
	I0818 12:29:18.776685    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:18.786933    3721 logs.go:276] 0 containers: []
	W0818 12:29:18.786945    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:18.787005    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:18.800662    3721 logs.go:276] 2 containers: [6d846616abd4 b7f45198e09a]
	I0818 12:29:18.800682    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:18.800687    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:18.836599    3721 logs.go:123] Gathering logs for kube-proxy [77d1e703b04a] ...
	I0818 12:29:18.836610    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77d1e703b04a"
	I0818 12:29:18.848385    3721 logs.go:123] Gathering logs for kube-controller-manager [5fd8ca21e473] ...
	I0818 12:29:18.848396    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fd8ca21e473"
	I0818 12:29:18.865366    3721 logs.go:123] Gathering logs for storage-provisioner [b7f45198e09a] ...
	I0818 12:29:18.865375    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7f45198e09a"
	I0818 12:29:18.877698    3721 logs.go:123] Gathering logs for kube-apiserver [f4408189a16b] ...
	I0818 12:29:18.877710    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4408189a16b"
	I0818 12:29:18.891803    3721 logs.go:123] Gathering logs for kube-apiserver [ce6f54feb45f] ...
	I0818 12:29:18.891815    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce6f54feb45f"
	I0818 12:29:18.904328    3721 logs.go:123] Gathering logs for etcd [ec2a49bf8d72] ...
	I0818 12:29:18.904337    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec2a49bf8d72"
	I0818 12:29:18.930097    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:18.930108    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:18.953372    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:29:18.953381    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:18.966408    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:18.966418    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:19.003484    3721 logs.go:123] Gathering logs for etcd [6c149833de79] ...
	I0818 12:29:19.003492    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c149833de79"
	I0818 12:29:19.022248    3721 logs.go:123] Gathering logs for coredns [95128c67f594] ...
	I0818 12:29:19.022261    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95128c67f594"
	I0818 12:29:19.033896    3721 logs.go:123] Gathering logs for storage-provisioner [6d846616abd4] ...
	I0818 12:29:19.033908    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d846616abd4"
	I0818 12:29:19.045377    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:19.045391    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:19.050342    3721 logs.go:123] Gathering logs for kube-scheduler [4abe37d3f920] ...
	I0818 12:29:19.050351    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abe37d3f920"
	I0818 12:29:19.062317    3721 logs.go:123] Gathering logs for kube-scheduler [0219e3a900cb] ...
	I0818 12:29:19.062327    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0219e3a900cb"
	I0818 12:29:19.073634    3721 logs.go:123] Gathering logs for kube-controller-manager [f7e9dad21f3c] ...
	I0818 12:29:19.073647    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e9dad21f3c"
	I0818 12:29:19.404442    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:19.404481    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:21.591019    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:26.593437    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:26.593535    3721 kubeadm.go:597] duration metric: took 4m4.060508375s to restartPrimaryControlPlane
	W0818 12:29:26.593583    3721 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 12:29:26.593609    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0818 12:29:27.553550    3721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:29:27.558712    3721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:29:27.561503    3721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:29:27.564282    3721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:29:27.564288    3721 kubeadm.go:157] found existing configuration files:
	
	I0818 12:29:27.564310    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf
	I0818 12:29:27.566735    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:29:27.566753    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:29:27.569412    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf
	I0818 12:29:27.572374    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:29:27.572394    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:29:27.575106    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf
	I0818 12:29:27.577671    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:29:27.577690    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:29:27.580992    3721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf
	I0818 12:29:27.583589    3721 kubeadm.go:163] "https://control-plane.minikube.internal:50258" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50258 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:29:27.583609    3721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 12:29:27.586002    3721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 12:29:27.603899    3721 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0818 12:29:27.603936    3721 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 12:29:27.650137    3721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 12:29:27.650190    3721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 12:29:27.650237    3721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 12:29:27.700100    3721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 12:29:27.704069    3721 out.go:235]   - Generating certificates and keys ...
	I0818 12:29:27.704099    3721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 12:29:27.704131    3721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 12:29:27.704171    3721 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 12:29:27.704200    3721 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 12:29:27.704234    3721 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 12:29:27.704260    3721 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 12:29:27.704322    3721 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 12:29:27.704349    3721 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 12:29:27.704393    3721 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 12:29:27.704430    3721 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 12:29:27.704455    3721 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 12:29:27.704489    3721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 12:29:27.843754    3721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 12:29:28.109279    3721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 12:29:28.168519    3721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 12:29:28.233520    3721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 12:29:28.262042    3721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 12:29:28.262469    3721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 12:29:28.262497    3721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 12:29:28.351817    3721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 12:29:24.405525    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:24.405581    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:28.355078    3721 out.go:235]   - Booting up control plane ...
	I0818 12:29:28.355161    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 12:29:28.355204    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 12:29:28.355238    3721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 12:29:28.355276    3721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 12:29:28.355593    3721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 12:29:29.406686    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:29.406724    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:32.858400    3721 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502554 seconds
	I0818 12:29:32.858528    3721 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 12:29:32.862094    3721 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 12:29:33.372428    3721 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 12:29:33.372553    3721 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-363000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 12:29:33.876485    3721 kubeadm.go:310] [bootstrap-token] Using token: cfss4f.fgmjhgud2ap50126
	I0818 12:29:33.882551    3721 out.go:235]   - Configuring RBAC rules ...
	I0818 12:29:33.882615    3721 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 12:29:33.882665    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 12:29:33.887205    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 12:29:33.888062    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 12:29:33.889414    3721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 12:29:33.890195    3721 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 12:29:33.893354    3721 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 12:29:34.056219    3721 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 12:29:34.280262    3721 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 12:29:34.280774    3721 kubeadm.go:310] 
	I0818 12:29:34.280813    3721 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 12:29:34.280816    3721 kubeadm.go:310] 
	I0818 12:29:34.280853    3721 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 12:29:34.280857    3721 kubeadm.go:310] 
	I0818 12:29:34.280868    3721 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 12:29:34.280905    3721 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 12:29:34.280933    3721 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 12:29:34.280938    3721 kubeadm.go:310] 
	I0818 12:29:34.280968    3721 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 12:29:34.280973    3721 kubeadm.go:310] 
	I0818 12:29:34.280996    3721 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 12:29:34.280999    3721 kubeadm.go:310] 
	I0818 12:29:34.281026    3721 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 12:29:34.281065    3721 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 12:29:34.281101    3721 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 12:29:34.281105    3721 kubeadm.go:310] 
	I0818 12:29:34.281141    3721 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 12:29:34.281181    3721 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 12:29:34.281184    3721 kubeadm.go:310] 
	I0818 12:29:34.281222    3721 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cfss4f.fgmjhgud2ap50126 \
	I0818 12:29:34.281284    3721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 \
	I0818 12:29:34.281296    3721 kubeadm.go:310] 	--control-plane 
	I0818 12:29:34.281301    3721 kubeadm.go:310] 
	I0818 12:29:34.281345    3721 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 12:29:34.281348    3721 kubeadm.go:310] 
	I0818 12:29:34.281395    3721 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cfss4f.fgmjhgud2ap50126 \
	I0818 12:29:34.281452    3721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 
	I0818 12:29:34.281503    3721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 12:29:34.281561    3721 cni.go:84] Creating CNI manager for ""
	I0818 12:29:34.281571    3721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:29:34.286028    3721 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 12:29:34.290029    3721 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 12:29:34.293050    3721 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 12:29:34.297847    3721 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 12:29:34.297890    3721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 12:29:34.297908    3721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-363000 minikube.k8s.io/updated_at=2024_08_18T12_29_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=running-upgrade-363000 minikube.k8s.io/primary=true
	I0818 12:29:34.342442    3721 ops.go:34] apiserver oom_adj: -16
	I0818 12:29:34.342454    3721 kubeadm.go:1113] duration metric: took 44.600583ms to wait for elevateKubeSystemPrivileges
	I0818 12:29:34.342463    3721 kubeadm.go:394] duration metric: took 4m11.823123083s to StartCluster
	I0818 12:29:34.342473    3721 settings.go:142] acquiring lock: {Name:mk5a561ec5cb84c336df08f67624cd54d50bdb17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:29:34.342563    3721 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:29:34.342946    3721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:29:34.343155    3721 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:29:34.343160    3721 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:29:34.343190    3721 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-363000"
	I0818 12:29:34.343202    3721 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-363000"
	W0818 12:29:34.343205    3721 addons.go:243] addon storage-provisioner should already be in state true
	I0818 12:29:34.343220    3721 host.go:66] Checking if "running-upgrade-363000" exists ...
	I0818 12:29:34.343222    3721 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-363000"
	I0818 12:29:34.343245    3721 config.go:182] Loaded profile config "running-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:29:34.343265    3721 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-363000"
	I0818 12:29:34.344185    3721 kapi.go:59] client config for running-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/running-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1067e9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:29:34.344302    3721 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-363000"
	W0818 12:29:34.344306    3721 addons.go:243] addon default-storageclass should already be in state true
	I0818 12:29:34.344316    3721 host.go:66] Checking if "running-upgrade-363000" exists ...
	I0818 12:29:34.347053    3721 out.go:177] * Verifying Kubernetes components...
	I0818 12:29:34.347339    3721 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 12:29:34.350122    3721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 12:29:34.350128    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:29:34.353720    3721 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:29:34.357956    3721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:29:34.360965    3721 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:29:34.360970    3721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 12:29:34.360975    3721 sshutil.go:53] new ssh client: &{IP:localhost Port:50226 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/running-upgrade-363000/id_rsa Username:docker}
	I0818 12:29:34.450824    3721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:29:34.455940    3721 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:29:34.455985    3721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:29:34.459904    3721 api_server.go:72] duration metric: took 116.7395ms to wait for apiserver process to appear ...
	I0818 12:29:34.459912    3721 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:29:34.459918    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:34.494946    3721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 12:29:34.544522    3721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:29:34.824440    3721 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:29:34.824452    3721 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:29:34.407891    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:34.407914    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:39.462015    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:39.462069    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:39.409394    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:39.409444    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:44.462458    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:44.462481    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:44.411490    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:44.411544    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:49.462831    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:49.462855    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:49.413783    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:49.413807    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:54.463227    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:54.463258    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:54.415961    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:54.416009    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:59.463504    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:59.463517    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:59.418263    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:59.418389    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:59.429658    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:29:59.429738    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:59.440858    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:29:59.440921    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:59.451489    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:29:59.451555    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:59.463345    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:29:59.463415    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:59.478970    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:29:59.479037    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:59.491345    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:29:59.491406    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:59.502202    3866 logs.go:276] 0 containers: []
	W0818 12:29:59.502213    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:59.502266    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:59.513427    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:29:59.513446    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:29:59.513452    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:29:59.525450    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:59.525465    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:59.551900    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:29:59.551915    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:29:59.563454    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:29:59.563466    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:29:59.580220    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:29:59.580234    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:29:59.597441    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:59.597453    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:59.635390    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:59.635398    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:59.639553    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:59.639562    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:59.717784    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:29:59.717796    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:29:59.733298    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:29:59.733311    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:29:59.747374    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:29:59.747387    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:29:59.758538    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:29:59.758552    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:29:59.770218    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:29:59.770229    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:59.782715    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:29:59.782725    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:29:59.812188    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:29:59.812199    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:29:59.827368    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:29:59.827378    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:29:59.843397    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:29:59.843410    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:02.357811    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:04.464086    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:04.464122    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0818 12:30:04.826808    3721 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0818 12:30:04.831690    3721 out.go:177] * Enabled addons: storage-provisioner
	I0818 12:30:04.837615    3721 addons.go:510] duration metric: took 30.494708333s for enable addons: enabled=[storage-provisioner]
	I0818 12:30:07.360035    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:07.360337    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:07.384695    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:07.384799    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:07.401525    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:07.401622    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:07.418586    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:07.418661    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:07.429440    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:07.429508    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:07.439965    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:07.440024    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:07.449820    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:07.449888    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:07.459766    3866 logs.go:276] 0 containers: []
	W0818 12:30:07.459777    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:07.459837    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:07.471389    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:07.471406    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:07.471412    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:07.475728    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:07.475736    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:07.515607    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:07.515622    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:07.530480    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:07.530491    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:07.547159    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:07.547168    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:07.559092    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:07.559103    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:07.586040    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:07.586052    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:07.599792    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:07.599804    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:07.636667    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:07.636676    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:07.654125    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:07.654136    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:07.666683    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:07.666692    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:07.677928    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:07.677939    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:07.702284    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:07.702298    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:07.718759    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:07.718770    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:07.732990    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:07.733001    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:07.755377    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:07.755392    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:07.770921    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:07.770930    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:09.464986    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:09.465028    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:10.285572    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:14.466127    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:14.466179    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:15.287127    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:15.287332    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:15.313065    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:15.313193    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:15.330439    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:15.330523    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:15.344233    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:15.344301    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:15.356030    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:15.356115    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:15.367211    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:15.367279    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:15.377621    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:15.377683    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:15.388249    3866 logs.go:276] 0 containers: []
	W0818 12:30:15.388262    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:15.388320    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:15.406562    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:15.406583    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:15.406588    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:15.418797    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:15.418808    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:15.443680    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:15.443688    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:15.455599    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:15.455611    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:15.494476    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:15.494484    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:15.518003    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:15.518014    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:15.542525    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:15.542537    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:15.557450    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:15.557461    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:15.569338    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:15.569350    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:15.604315    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:15.604327    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:15.629737    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:15.629748    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:15.643708    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:15.643717    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:15.660825    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:15.660840    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:15.676074    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:15.676087    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:15.680534    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:15.680542    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:15.691568    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:15.691579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:15.702710    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:15.702721    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:18.215460    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:19.467769    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:19.467818    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:23.217780    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:23.217896    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:23.230366    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:23.230445    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:23.241394    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:23.241463    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:23.256036    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:23.256106    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:23.266616    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:23.266676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:23.276368    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:23.276434    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:23.286643    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:23.286717    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:23.297239    3866 logs.go:276] 0 containers: []
	W0818 12:30:23.297252    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:23.297312    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:23.307891    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:23.307911    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:23.307916    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:23.343953    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:23.343967    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:23.355657    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:23.355669    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:23.371612    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:23.371624    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:23.387781    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:23.387797    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:23.399789    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:23.399802    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:23.410726    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:23.410737    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:23.443931    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:23.443943    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:23.458139    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:23.458152    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:23.472597    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:23.472608    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:23.490428    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:23.490438    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:23.515242    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:23.515252    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:24.469590    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:24.469635    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:23.553422    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:23.553435    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:23.565697    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:23.565707    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:23.578911    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:23.578926    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:23.583449    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:23.583457    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:23.601954    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:23.601966    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
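
A note for readers tracing the loop above: every gathering pass has the same two-step shape. It first resolves container IDs for a component with a name filter (the docker ps -a --filter=name=k8s_<component> --format={{.ID}} lines), then tails the last 400 lines from each match (the docker logs --tail 400 <id> lines). Below is a minimal Go sketch of that pattern; the helper names are hypothetical, and it shells out to docker directly rather than through minikube's ssh_runner inside the guest VM.

    // containerlogs.go - sketch of the discover-then-tail pattern above.
    // listContainers and tailLogs are hypothetical names; minikube runs the
    // same commands through an SSH runner inside the guest VM.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // Mirrors: docker logs --tail 400 <id>
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                logs, _ := tailLogs(id) // best-effort, as in the report
                fmt.Println(logs)
            }
        }
    }
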
	I0818 12:30:26.115668    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:29.471787    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:29.471831    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:31.118213    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:31.118498    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:31.145768    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:31.145896    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:31.163003    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:31.163085    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:31.176332    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:31.176405    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:31.188108    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:31.188183    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:31.198370    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:31.198443    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:31.208463    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:31.208524    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:31.222881    3866 logs.go:276] 0 containers: []
	W0818 12:30:31.222893    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:31.222949    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:31.233691    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:31.233708    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:31.233714    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:31.248061    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:31.248071    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:31.259431    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:31.259444    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:31.275358    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:31.275370    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:31.289301    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:31.289313    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:31.301054    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:31.301065    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:31.312395    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:31.312408    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:31.324416    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:31.324427    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:31.342494    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:31.342506    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:31.381729    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:31.381742    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:31.386148    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:31.386158    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:31.423738    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:31.423749    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:31.448757    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:31.448768    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:31.463004    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:31.463013    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:31.474340    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:31.474354    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:31.489279    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:31.489291    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:31.501680    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:31.501694    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:34.474095    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:34.474191    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:34.487266    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:30:34.487343    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:34.497867    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:30:34.497937    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:34.508512    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:30:34.508575    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:34.519071    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:30:34.519136    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:34.529589    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:30:34.529658    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:34.545583    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:30:34.545650    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:34.555595    3721 logs.go:276] 0 containers: []
	W0818 12:30:34.555610    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:34.555675    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:34.566785    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:30:34.566800    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:34.566807    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:30:34.599438    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:34.599539    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:34.600425    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:34.600431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:34.604917    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:34.604924    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:34.640155    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:30:34.640165    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:30:34.655969    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:34.655979    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:34.680698    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:30:34.680709    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:30:34.692084    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:30:34.692094    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:34.703326    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:30:34.703341    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:30:34.722685    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:30:34.722697    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:30:34.736575    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:30:34.736586    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:30:34.748246    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:30:34.748255    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:30:34.759710    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:30:34.759721    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:30:34.774334    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:30:34.774345    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:30:34.792488    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:34.792497    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:30:34.792525    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:30:34.792530    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:34.792533    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:34.792538    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:34.792541    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
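
The recurring kubelet problem flagged above is an authorization denial, not a crash: the node user system:node:running-upgrade-363000 is refused a list/watch on the kube-system/coredns ConfigMap because the Node authorizer reports "no relationship found between node ... and this object", i.e. no pod bound to that node references the ConfigMap at that moment, a state that can occur transiently while an upgrade re-registers the node. If reproducing, the denial can be confirmed from outside the kubelet with a SubjectAccessReview; a hedged client-go sketch follows (the kubeconfig path is the one the report itself uses, everything else is illustrative).

    // sarcheck.go - hedged sketch: ask the apiserver whether the node user
    // may list the coredns ConfigMap, reproducing the denial logged above.
    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                // Checked identity; system:nodes routes the decision through
                // the Node authorizer, as for the real kubelet.
                User:   "system:node:running-upgrade-363000",
                Groups: []string{"system:nodes"},
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "kube-system",
                    Verb:      "list",
                    Resource:  "configmaps",
                    Name:      "coredns",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
            context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }
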
	I0818 12:30:34.026669    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:39.028983    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:39.029213    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:39.052123    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:39.052240    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:39.069719    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:39.069792    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:39.081924    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:39.081993    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:39.092873    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:39.092947    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:39.103033    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:39.103098    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:39.120041    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:39.120107    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:39.131081    3866 logs.go:276] 0 containers: []
	W0818 12:30:39.131092    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:39.131152    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:39.141383    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:39.141402    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:39.141407    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:39.161434    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:39.161448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:39.175369    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:39.175382    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:39.190065    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:39.190076    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:39.200962    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:39.200971    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:39.213412    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:39.213424    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:39.217936    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:39.217946    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:39.252404    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:39.252415    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:39.264939    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:39.264951    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:39.301538    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:39.301548    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:39.326260    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:39.326272    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:39.348077    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:39.348091    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:39.360773    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:39.360786    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:39.385809    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:39.385817    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:39.400891    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:39.400903    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:39.411780    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:39.411791    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:39.423068    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:39.423081    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:41.935250    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:44.796637    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:46.937590    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:46.937787    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:46.955871    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:46.955940    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:46.967850    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:46.967933    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:46.978303    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:46.978370    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:46.988905    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:46.988980    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:47.003081    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:47.003151    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:47.014101    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:47.014164    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:47.024953    3866 logs.go:276] 0 containers: []
	W0818 12:30:47.024965    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:47.025020    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:47.035524    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:47.035541    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:47.035548    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:47.051359    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:47.051371    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:47.063400    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:47.063412    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:47.074997    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:47.075009    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:47.086416    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:47.086429    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:47.111203    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:47.111213    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:47.122932    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:47.122950    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:47.127033    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:47.127038    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:47.156005    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:47.156013    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:47.174040    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:47.174051    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:47.186350    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:47.186368    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:47.224651    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:47.224668    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:47.253693    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:47.253703    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:47.266705    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:47.266716    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:47.284456    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:47.284465    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:47.321226    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:47.321237    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:47.332708    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:47.332718    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:49.799019    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:49.799412    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:49.836097    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:30:49.836222    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:49.854297    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:30:49.854382    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:49.868826    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:30:49.868892    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:49.880684    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:30:49.880748    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:49.893518    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:30:49.893584    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:49.904519    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:30:49.904576    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:49.914534    3721 logs.go:276] 0 containers: []
	W0818 12:30:49.914551    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:49.914600    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:49.925108    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:30:49.925122    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:30:49.925128    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:30:49.940316    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:30:49.940330    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:30:49.952492    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:30:49.952502    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:30:49.975269    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:30:49.975283    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:30:49.986997    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:30:49.987006    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:30:50.008269    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:50.008278    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:50.012728    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:50.012736    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:50.050936    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:30:50.050946    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:30:49.848682    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:50.064749    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:30:50.064758    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:30:50.076704    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:30:50.076713    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:30:50.088780    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:50.088791    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:50.111614    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:30:50.111621    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:50.123061    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:50.123072    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:30:50.156115    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:50.156223    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:50.157140    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:50.157145    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:30:50.157175    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:30:50.157182    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:30:50.157186    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:30:50.157231    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:30:50.157235    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
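
Interleaved with these gathering passes, both processes (pids 3721 and 3866) keep re-probing https://10.0.2.15:8443/healthz and timing out, which is what the repeated "stopped: ... Client.Timeout exceeded while awaiting headers" lines record. A minimal sketch of such a probe follows, assuming a 5-second client timeout and the VM's self-signed apiserver certificate (hence InsecureSkipVerify); this is an illustration, not minikube's actual api_server.go.

    // healthprobe.go - illustration of the healthz probe that keeps timing
    // out above; the 5s timeout and TLS handling are assumptions, not a
    // copy of minikube's api_server.go.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // source of "Client.Timeout exceeded"
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert inside the VM.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        for {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(time.Second)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
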
	I0818 12:30:54.850969    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:54.851158    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:54.877241    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:54.877358    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:54.893831    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:54.893907    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:54.906718    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:54.906789    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:54.918295    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:54.918355    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:54.930002    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:54.930064    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:54.940609    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:54.940676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:54.950379    3866 logs.go:276] 0 containers: []
	W0818 12:30:54.950390    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:54.950447    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:54.961138    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:54.961156    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:54.961162    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:54.972675    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:54.972689    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:54.990511    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:54.990525    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:55.029438    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:55.029450    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:55.045257    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:55.045267    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:55.057145    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:55.057156    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:55.068850    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:55.068862    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:55.081331    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:55.081342    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:55.085580    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:55.085586    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:55.138483    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:55.138496    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:55.162550    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:55.162561    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:55.176680    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:55.176693    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:55.190978    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:55.190991    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:55.202295    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:55.202310    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:55.219819    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:55.219829    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:55.237242    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:55.237253    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:55.249396    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:55.249431    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:57.775006    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:02.777455    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:02.777720    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:02.806737    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:02.806875    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:02.823391    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:02.823473    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:02.836840    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:02.836914    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:02.851194    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:02.851264    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:02.863716    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:02.863790    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:02.874725    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:02.874792    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:02.886613    3866 logs.go:276] 0 containers: []
	W0818 12:31:02.886624    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:02.886682    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:02.897202    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:02.897226    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:02.897232    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:02.922225    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:02.922238    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:02.937643    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:02.937656    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:02.949608    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:02.949620    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:02.987885    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:02.987895    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:03.023438    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:03.023448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:03.039292    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:03.039303    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:03.051179    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:03.051191    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:03.063435    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:03.063448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:03.074761    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:03.074772    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:03.088956    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:03.088966    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:03.103742    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:03.103756    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:03.115729    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:03.115741    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:03.133682    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:03.133695    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:03.145015    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:03.145024    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:03.170174    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:03.170184    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:03.174169    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:03.174176    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:00.161268    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:05.689745    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:05.163555    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:05.163713    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:05.177592    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:05.177667    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:05.188671    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:05.188745    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:05.199220    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:05.199292    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:05.217082    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:05.217149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:05.229530    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:05.229601    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:05.240270    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:05.240338    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:05.250414    3721 logs.go:276] 0 containers: []
	W0818 12:31:05.250428    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:05.250477    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:05.260659    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:05.260674    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:05.260679    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:05.277978    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:05.277990    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:05.290147    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:05.290158    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:05.324713    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:05.324808    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:05.325742    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:05.325750    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:05.330631    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:05.330640    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:05.342279    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:05.342291    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:05.355334    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:05.355346    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:05.370260    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:05.370271    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:05.382025    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:05.382035    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:05.417141    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:05.417151    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:05.431358    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:05.431368    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:05.445246    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:05.445259    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:05.456747    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:05.456760    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:05.481696    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:05.481705    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:05.481732    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:05.481737    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:05.481740    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:05.481744    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:05.481746    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
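
One line worth decoding in each pass is the "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backquotes substitute crictl's full path when it is installed; when it is not, echo crictl keeps the command word non-empty so the first pipeline fails cleanly and the || sudo docker ps -a fallback runs instead. A Go wrapper around the same one-liner, with a hypothetical helper name:

    // statuscmd.go - wrapper around the exact "container status" one-liner
    // the report runs via /bin/bash -c; runContainerStatus is hypothetical.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runContainerStatus() (string, error) {
        // Prefer crictl when `which` finds it; otherwise the bare word
        // "crictl" fails and the || docker fallback executes.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := runContainerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(out)
    }
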
	I0818 12:31:10.692051    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:10.692190    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:10.704541    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:10.704619    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:10.715239    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:10.715310    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:10.725704    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:10.725778    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:10.736500    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:10.736572    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:10.747101    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:10.747172    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:10.757588    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:10.757660    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:10.768152    3866 logs.go:276] 0 containers: []
	W0818 12:31:10.768163    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:10.768224    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:10.778589    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:10.778609    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:10.778615    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:10.782731    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:10.782739    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:10.801294    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:10.801303    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:10.812920    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:10.812931    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:10.852144    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:10.852155    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:10.871482    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:10.871496    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:10.884585    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:10.884598    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:10.922446    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:10.922460    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:10.934640    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:10.934652    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:10.946399    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:10.946410    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:10.963520    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:10.963531    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:10.975655    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:10.975668    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:10.986922    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:10.986932    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:10.998015    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:10.998027    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:11.012405    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:11.012415    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:11.037095    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:11.037105    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:11.061434    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:11.061442    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:13.577541    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:15.485655    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:18.579130    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:18.579253    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:18.594819    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:18.594893    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:18.605786    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:18.605861    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:18.616693    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:18.616758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:18.627348    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:18.627417    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:18.638030    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:18.638095    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:18.648530    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:18.648591    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:18.664117    3866 logs.go:276] 0 containers: []
	W0818 12:31:18.664127    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:18.664182    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:18.674847    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
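
Before each gathering pass, the runner resolves container IDs component by component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, and `logs.go:278` warns when a component matches nothing (kindnet here, since no CNI addon is involved). A rough sketch of that discovery loop, assuming Docker is reachable locally rather than over SSH; the component list is copied from the commands above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the discovery commands above: list all containers,
    // running or exited, whose name matches k8s_<component>, printing IDs only.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		}
    	}
    }
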
	I0818 12:31:18.674869    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:18.674875    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:18.686606    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:18.686622    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:18.708900    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:18.708908    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:18.733624    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:18.733635    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:18.745292    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:18.745302    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:18.760752    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:18.760763    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:18.772585    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:18.772600    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:18.783825    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:18.783839    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:18.796507    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:18.796519    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:18.811147    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:18.811158    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:18.827158    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:18.827171    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:18.847250    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:18.847261    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:18.858610    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:18.858625    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:18.871429    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:18.871442    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:18.910739    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:18.910751    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:18.914832    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:18.914841    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:18.948758    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:18.948771    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:21.464953    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:20.487366    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:20.487541    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:20.503417    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:20.503502    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:20.516044    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:20.516121    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:20.527340    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:20.527413    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:20.537617    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:20.537686    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:20.548016    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:20.548083    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:20.558192    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:20.558263    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:20.567971    3721 logs.go:276] 0 containers: []
	W0818 12:31:20.567986    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:20.568039    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:20.578246    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:20.578262    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:20.578267    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:20.595712    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:20.595723    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:20.629372    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:20.629463    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
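
The two `logs.go:138` warnings flag a kubelet problem found while scanning the journal: the node user cannot list the coredns ConfigMap because the node authorizer sees no relationship between the node and that object yet. A sketch of that kind of signature scan follows; the substring list is an assumption for illustration, and minikube's real matcher is more elaborate:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // findKubeletProblems scans journal output line by line and keeps lines
    // matching known failure signatures, as the logs.go:138 warnings do.
    // The signature list below is an assumption for illustration.
    func findKubeletProblems(journal string) []string {
    	signatures := []string{"is forbidden", "Failed to watch", "failed to list"}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, sig := range signatures {
    			if strings.Contains(line, sig) {
    				problems = append(problems, line)
    				break
    			}
    		}
    	}
    	return problems
    }

    func main() {
    	journal := "kubelet[13095]: W0818 ... configmaps \"coredns\" is forbidden\n" +
    		"kubelet[13095]: I0818 ... ordinary line"
    	for _, p := range findKubeletProblems(journal) {
    		fmt.Println("Found kubelet problem:", p)
    	}
    }
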
	I0818 12:31:20.630340    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:20.630345    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:20.635332    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:20.635340    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:20.670339    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:20.670349    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:20.685065    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:20.685075    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:20.698766    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:20.698777    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:20.710191    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:20.710201    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:20.729489    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:20.729499    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:20.740801    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:20.740811    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:20.763522    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:20.763530    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:20.778500    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:20.778510    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:20.790119    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:20.790130    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:20.801942    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:20.801951    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:20.801979    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:20.801984    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:20.801987    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:20.801992    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:20.801994    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:26.467230    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:26.467499    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:26.492035    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:26.492156    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:26.513604    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:26.513684    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:26.525800    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:26.525869    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:26.537340    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:26.537415    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:26.547550    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:26.547613    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:26.566672    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:26.566744    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:26.577000    3866 logs.go:276] 0 containers: []
	W0818 12:31:26.577013    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:26.577072    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:26.588020    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:26.588040    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:26.588047    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:26.592298    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:26.592307    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:26.604190    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:26.604204    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:26.616359    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:26.616374    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:26.654470    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:26.654477    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:26.688655    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:26.688666    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:26.703609    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:26.703620    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:26.735599    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:26.735609    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:26.749491    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:26.749501    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:26.761566    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:26.761579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:26.774354    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:26.774365    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:26.786144    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:26.786155    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:26.800206    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:26.800215    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:26.811695    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:26.811706    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:26.823207    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:26.823218    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:26.838592    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:26.838605    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:26.856402    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:26.856412    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:29.381720    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:30.806236    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:34.384040    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:34.384152    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:34.395561    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:34.395631    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:34.406478    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:34.406553    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:34.417498    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:34.417567    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:34.427896    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:34.427968    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:34.438518    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:34.438587    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:34.454668    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:34.454743    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:34.465107    3866 logs.go:276] 0 containers: []
	W0818 12:31:34.465120    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:34.465176    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:34.479631    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:34.479651    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:34.479656    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:34.496560    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:34.496570    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:34.509726    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:34.509738    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:34.521361    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:34.521374    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:34.557305    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:34.557318    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:34.575046    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:34.575059    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:34.589301    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:34.589314    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:34.601373    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:34.601385    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:34.646072    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:34.646082    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:34.657750    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:34.657762    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:34.673419    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:34.673431    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:34.688275    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:34.688289    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:34.700054    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:34.700066    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:34.724654    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:34.724662    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:34.743232    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:34.743243    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:34.747635    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:34.747644    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:34.771812    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:34.771827    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:37.283966    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:35.808899    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:35.809078    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:35.830567    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:35.830653    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:35.841734    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:35.841805    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:35.852543    3721 logs.go:276] 2 containers: [a143f9cf22f7 d277ab82a17b]
	I0818 12:31:35.852616    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:35.862694    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:35.862762    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:35.872884    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:35.872956    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:35.883149    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:35.883221    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:35.893257    3721 logs.go:276] 0 containers: []
	W0818 12:31:35.893269    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:35.893331    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:35.903363    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:35.903378    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:35.903383    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:35.907861    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:35.907868    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:35.919709    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:35.919719    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:35.935744    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:35.935754    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:35.947770    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:35.947783    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:35.965608    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:35.965617    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:35.997851    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:35.997944    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:35.998845    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:35.998851    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:36.034377    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:36.034388    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:36.048835    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:36.048847    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:36.062555    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:36.062568    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:36.077238    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:36.077252    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:36.088619    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:36.088630    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:36.111393    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:36.111401    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:36.122799    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:36.122810    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:36.122838    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:36.122843    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:36.122847    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:36.122851    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:36.122853    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:42.286448    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:42.286715    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:42.311731    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:42.311852    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:42.327806    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:42.327886    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:42.340909    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:42.340979    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:42.352389    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:42.352449    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:42.363029    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:42.363099    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:42.373774    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:42.373840    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:42.384499    3866 logs.go:276] 0 containers: []
	W0818 12:31:42.384510    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:42.384573    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:42.396259    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:42.396282    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:42.396289    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:42.401601    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:42.401610    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:42.415566    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:42.415581    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:42.432867    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:42.432881    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:42.453719    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:42.453731    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:42.468015    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:42.468030    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:42.480289    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:42.480305    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:42.517824    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:42.517831    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:42.528849    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:42.528860    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:42.555225    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:42.555240    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:42.566451    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:42.566465    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:42.589408    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:42.589423    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:42.604452    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:42.604462    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:42.615776    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:42.615786    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:42.631245    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:42.631259    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:42.643306    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:42.643319    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:42.665279    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:42.665285    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:45.201446    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:46.126888    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:50.204124    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:50.204284    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:50.218387    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:50.218474    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:50.229851    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:50.229919    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:50.240302    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:50.240369    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:50.250397    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:50.250480    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:50.260723    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:50.260796    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:50.271643    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:50.271712    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:50.290989    3866 logs.go:276] 0 containers: []
	W0818 12:31:50.291003    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:50.291064    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:50.301305    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:50.301323    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:50.301329    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:50.337086    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:50.337100    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:50.351526    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:50.351539    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:50.363555    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:50.363567    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:50.367919    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:50.367928    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:50.394047    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:50.394059    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:50.407804    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:50.407813    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:50.419125    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:50.419137    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:50.430204    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:50.430215    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:50.452587    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:50.452598    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:50.466342    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:50.466352    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:50.477772    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:50.477784    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:50.489729    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:50.489739    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:50.501614    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:50.501625    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:50.513108    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:50.513119    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:50.552049    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:50.552056    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:50.567025    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:50.567036    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:53.089018    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:51.129148    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:51.129432    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:51.162024    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:31:51.162152    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:51.180349    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:31:51.180447    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:51.194991    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:31:51.195067    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:51.212116    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:31:51.212182    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:51.223094    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:31:51.223159    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:51.233995    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:31:51.234069    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:51.244221    3721 logs.go:276] 0 containers: []
	W0818 12:31:51.244233    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:51.244288    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:51.254832    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:31:51.254851    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:31:51.254857    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:31:51.266874    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:31:51.266885    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:31:51.281875    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:31:51.281887    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:31:51.299553    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:51.299563    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:31:51.331981    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:51.332073    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:51.333002    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:31:51.333010    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:31:51.348601    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:51.348610    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:51.372422    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:31:51.372431    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:51.383937    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:51.383948    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:51.388502    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:31:51.388510    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:31:51.400543    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:31:51.400556    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:31:51.412053    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:31:51.412067    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:31:51.423454    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:31:51.423464    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:31:51.437280    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:31:51.437291    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:31:51.451295    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:31:51.451305    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:31:51.463204    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:51.463218    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:51.497229    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:51.497242    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:31:51.497273    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:31:51.497279    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:31:51.497283    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:31:51.497286    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:31:51.497290    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:31:58.091502    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:58.091730    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:58.116829    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:58.116935    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:58.131834    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:58.131899    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:58.145378    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:58.145456    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:58.156695    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:58.156767    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:58.170302    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:58.170368    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:58.181030    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:58.181104    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:58.191741    3866 logs.go:276] 0 containers: []
	W0818 12:31:58.191753    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:58.191814    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:58.202260    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:58.202278    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:58.202283    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:58.227423    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:58.227434    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:58.238928    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:58.238939    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:58.250632    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:58.250645    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:58.264768    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:58.264778    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:58.282757    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:58.282768    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:58.320978    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:58.320988    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:58.325287    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:58.325296    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:58.339532    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:58.339541    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:58.353683    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:58.353695    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:58.366310    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:58.366320    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:58.382411    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:58.382421    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:58.394377    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:58.394389    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:58.407851    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:58.407864    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:58.449475    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:58.449489    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:58.462520    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:58.462531    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:58.474908    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:58.474919    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:00.997980    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:01.501326    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:06.000274    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:06.000497    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:06.023504    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:06.023601    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:06.038068    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:06.038145    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:06.050500    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:06.050569    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:06.061198    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:06.061268    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:06.071243    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:06.071312    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:06.081552    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:06.081623    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:06.091471    3866 logs.go:276] 0 containers: []
	W0818 12:32:06.091481    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:06.091533    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:06.101874    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:06.101892    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:06.101897    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:06.116508    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:06.116521    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:06.130665    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:06.130675    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:06.154323    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:06.154335    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:06.165988    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:06.166001    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:06.203791    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:06.203801    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:06.208202    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:06.208207    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:06.245825    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:06.245837    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:06.260228    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:06.260238    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:06.271478    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:06.271492    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:06.289539    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:06.289552    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:06.306354    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:06.306365    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:06.330876    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:06.330890    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:06.345704    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:06.345715    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:06.358289    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:06.358300    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:06.373760    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:06.373772    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:06.385869    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:06.385880    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
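
The cycle that just completed is the report's recurring shape: after a failed healthz probe, the tooling enumerates each control-plane component's containers (docker ps -a --filter=name=k8s_<component> --format={{.ID}}, the ssh_runner.go:195 lines) and tails the last 400 lines of each one found (docker logs --tail 400 <id>, the logs.go:123 lines). The Go sketch below is an illustrative reconstruction of that loop, not minikube's source; the names (components, containerIDs) and output shapes are invented for the example.

    // Illustrative reconstruction of the log-gathering cycle above; not
    // minikube's actual source. Component names match the k8s_* filters
    // that appear in the output.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                // Matches the warning shape: No container was found matching "kindnet"
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Mirrors: /bin/bash -c "docker logs --tail 400 <id>"
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }

The "container status" step in the log uses a fallback in the same spirit: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl when it is installed and fall back to docker ps -a otherwise.
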
	I0818 12:32:06.503534    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:06.503624    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:06.514375    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:06.514446    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:06.525200    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:06.525274    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:06.535887    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:06.535958    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:06.546491    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:06.546555    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:06.558100    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:06.558196    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:06.569044    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:06.569104    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:06.579113    3721 logs.go:276] 0 containers: []
	W0818 12:32:06.579125    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:06.579190    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:06.590236    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:06.590257    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:06.590262    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:06.602615    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:06.602625    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:06.614570    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:06.614580    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:06.619167    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:06.619173    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:06.630977    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:06.630987    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:06.651690    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:06.651699    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:06.686569    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:06.686582    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:06.701066    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:06.701076    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:06.712361    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:06.712372    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:06.727908    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:06.727920    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:06.762059    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:06.762152    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:06.763024    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:06.763030    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:06.777202    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:06.777212    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:06.800763    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:06.800776    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:06.812198    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:06.812213    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:06.825935    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:06.825947    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:06.837710    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:06.837720    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:06.837751    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:06.837756    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:06.837760    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:06.837768    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:06.837770    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
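
The two "Found kubelet problem" warnings, re-emitted verbatim in each cycle below, are hits from a scan of the kubelet journal: the kubelet on running-upgrade-363000 is denied a list/watch of the kube-system/coredns ConfigMap because the node authorizer finds no relationship between the node and that object, a symptom consistent with the apiserver never becoming healthy during this upgrade run. What follows is a hedged sketch of such a scan; the marker strings are a guess based on the single problem class visible here, and none of it is minikube's real logic (which the log attributes to logs.go:138).

    // Hedged sketch of a kubelet-journal problem scan like the one that
    // prints "Found kubelet problem" above. The marker list is a guess
    // based on the one problem class visible in this run; it is not
    // minikube's actual pattern set.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    var problemMarkers = []string{"is forbidden:", "Failed to watch"}

    func main() {
        // The real invocation runs remotely under sudo:
        //   /bin/bash -c "sudo journalctl -u kubelet -n 400"
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := sc.Text()
            for _, m := range problemMarkers {
                if strings.Contains(line, m) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        if len(problems) > 0 {
            // Matches the summary block shape above:
            //   X Problems detected in kubelet:
            fmt.Println("X Problems detected in kubelet:")
            for _, p := range problems {
                fmt.Println("  " + p)
            }
        }
    }
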
	I0818 12:32:08.900957    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:13.901277    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:13.901435    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:13.917597    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:13.917676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:13.930841    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:13.930910    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:13.941443    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:13.941513    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:13.952326    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:13.952394    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:13.963055    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:13.963124    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:13.973972    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:13.974044    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:13.984579    3866 logs.go:276] 0 containers: []
	W0818 12:32:13.984595    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:13.984647    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:13.998826    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:13.998842    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:13.998852    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:14.011045    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:14.011060    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:14.023747    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:14.023761    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:14.035118    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:14.035131    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:14.070796    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:14.070809    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:14.085192    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:14.085203    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:14.097111    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:14.097122    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:14.108598    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:14.108608    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:14.147355    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:14.147367    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:14.161594    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:14.161606    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:14.186482    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:14.186492    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:14.208247    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:14.208256    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:14.220308    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:14.220319    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:14.224864    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:14.224872    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:14.238900    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:14.238914    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:14.251066    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:14.251078    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:14.266990    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:14.267001    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:16.786869    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:16.841794    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:21.789215    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:21.789689    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:21.860420    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:21.860467    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:21.890284    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:21.890421    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:21.909076    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:21.909154    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:21.922044    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:21.922118    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:21.937860    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:21.937931    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:21.949108    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:21.949184    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:21.959764    3866 logs.go:276] 0 containers: []
	W0818 12:32:21.959774    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:21.959830    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:21.971485    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:21.971504    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:21.971509    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:21.975915    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:21.975927    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:21.988438    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:21.988455    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:22.001239    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:22.001252    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:22.039244    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:22.039257    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:22.051676    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:22.051688    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:22.071603    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:22.071615    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:22.084855    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:22.084866    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:22.109675    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:22.109686    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:22.122907    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:22.122919    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:22.163587    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:22.163596    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:22.179220    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:22.179231    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:22.198325    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:22.198338    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:22.211777    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:22.211790    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:22.228733    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:22.228743    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:22.240568    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:22.240579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:22.261314    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:22.261324    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:21.843093    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:21.843266    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:21.860255    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:21.860337    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:21.875324    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:21.875404    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:21.887320    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:21.887390    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:21.899334    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:21.899404    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:21.910748    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:21.910807    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:21.922853    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:21.922889    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:21.933892    3721 logs.go:276] 0 containers: []
	W0818 12:32:21.933904    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:21.933968    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:21.945545    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:21.945569    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:21.945576    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:21.958717    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:21.958731    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:21.971595    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:21.971604    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:21.993105    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:21.993115    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:22.020306    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:22.020332    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:22.035735    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:22.035750    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:22.048002    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:22.048013    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:22.065095    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:22.065107    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:22.099212    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:22.099307    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:22.100209    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:22.100216    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:22.104574    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:22.104584    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:22.117539    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:22.117552    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:22.130933    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:22.130946    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:22.149252    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:22.149262    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:22.161544    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:22.161555    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:22.207784    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:22.207797    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:22.223833    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:22.223845    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:22.223872    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:22.223876    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:22.223902    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:22.223908    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:22.223911    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:32:24.788054    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:29.790487    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:29.790680    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:29.806894    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:29.806979    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:29.819391    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:29.819477    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:29.830172    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:29.830246    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:29.840819    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:29.840895    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:29.852790    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:29.852858    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:29.863088    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:29.863161    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:29.873695    3866 logs.go:276] 0 containers: []
	W0818 12:32:29.873706    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:29.873758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:29.893736    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:29.893755    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:29.893761    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:29.906881    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:29.906895    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:29.918064    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:29.918077    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:29.935626    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:29.935641    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:29.950395    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:29.950405    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:29.963237    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:29.963247    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:29.986423    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:29.986431    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:30.021986    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:30.021996    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:30.035852    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:30.035863    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:30.050445    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:30.050458    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:30.062098    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:30.062113    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:30.077314    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:30.077327    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:30.088502    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:30.088514    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:30.100193    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:30.100206    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:30.138230    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:30.138245    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:30.142508    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:30.142523    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:30.170263    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:30.170277    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:32.689472    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:32.227930    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:37.690475    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:37.690625    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:37.707033    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:37.707097    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:37.719903    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:37.719976    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:37.730674    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:37.730745    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:37.741295    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:37.741368    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:37.755085    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:37.755150    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:37.765322    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:37.765392    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:37.776202    3866 logs.go:276] 0 containers: []
	W0818 12:32:37.776217    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:37.776281    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:37.786674    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:37.786695    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:37.786700    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:37.811010    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:37.811025    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:37.830741    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:37.830753    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:37.848006    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:37.848016    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:37.859591    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:37.859603    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:37.894198    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:37.894209    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:37.908031    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:37.908042    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:37.927841    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:37.927852    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:37.950508    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:37.950521    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:37.965354    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:37.965370    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:37.969836    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:37.969843    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:37.984261    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:37.984272    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:38.006979    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:38.006994    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:38.044302    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:38.044312    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:38.058691    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:38.058705    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:38.070603    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:38.070615    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:38.082460    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:38.082469    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:37.230141    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:37.230311    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:37.251767    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:37.251851    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:37.263449    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:37.263521    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:37.274388    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:37.274469    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:37.284605    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:37.284672    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:37.295020    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:37.295087    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:37.305451    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:37.305518    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:37.315791    3721 logs.go:276] 0 containers: []
	W0818 12:32:37.315803    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:37.315857    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:37.326141    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:37.326166    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:37.326171    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:37.338030    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:37.338041    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:37.349504    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:37.349515    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:37.375237    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:37.375246    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:37.411555    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:37.411565    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:37.429307    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:37.429318    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:37.463579    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:37.463678    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:37.464580    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:37.464587    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:37.477493    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:37.477503    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:37.492206    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:37.492217    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:37.504483    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:37.504493    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:37.509017    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:37.509027    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:37.523027    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:37.523037    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:37.537104    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:37.537113    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:37.549610    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:37.549620    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:37.561205    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:37.561215    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:37.572680    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:37.572690    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:37.572718    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:37.572722    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:37.572725    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:37.572728    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:37.572731    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
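
Every gathering cycle in this section is book-ended by the same probe pair: "Checking apiserver healthz at https://10.0.2.15:8443/healthz" followed, roughly five seconds later, by "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)". That gap suggests a per-request client timeout of about five seconds. The sketch below reproduces the error shape under that assumption only; it is not minikube's implementation, and the timeout, retry bound, and TLS handling are all illustrative.

    // Illustrative probe loop producing the "stopped: ... Client.Timeout
    // exceeded" shape above. Assumptions: ~5s per-request timeout and a
    // self-signed apiserver cert (hence InsecureSkipVerify); neither is
    // confirmed by this log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probeHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // On timeout, err already reads "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)".
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        for attempt := 0; attempt < 10; attempt++ {
            if err := probeHealthz(url, 5*time.Second); err != nil {
                fmt.Println(err)
                // On failure the caller gathers component logs (the
                // docker ps / docker logs cycles above) before retrying.
                time.Sleep(2 * time.Second)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
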
	I0818 12:32:40.596230    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:45.598573    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:45.598791    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:45.618252    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:45.618343    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:45.633979    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:45.634058    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:45.645309    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:45.645372    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:45.655757    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:45.655826    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:45.665967    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:45.666030    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:45.676320    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:45.676383    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:45.686783    3866 logs.go:276] 0 containers: []
	W0818 12:32:45.686793    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:45.686858    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:45.696722    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:45.696740    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:45.696746    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:45.709245    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:45.709256    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:45.721650    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:45.721661    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:45.759588    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:45.759597    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:45.763838    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:45.763847    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:45.777539    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:45.777550    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:45.801980    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:45.801993    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:45.816202    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:45.816215    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:45.828423    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:45.828434    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:45.840045    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:45.840056    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:45.857008    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:45.857019    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:45.868707    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:45.868717    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:45.891365    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:45.891376    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:45.926394    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:45.926409    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:45.941357    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:45.941367    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:45.952511    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:45.952522    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:45.968638    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:45.968651    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:48.486111    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:47.576793    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:53.488285    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:53.488485    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:53.509175    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:53.509274    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:53.523740    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:53.523809    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:52.578054    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:52.578187    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:52.589738    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:32:52.589812    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:52.600072    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:32:52.600149    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:52.610574    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:32:52.610648    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:52.621176    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:32:52.621243    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:52.631495    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:32:52.631568    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:52.642242    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:32:52.642307    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:52.655552    3721 logs.go:276] 0 containers: []
	W0818 12:32:52.655562    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:52.655617    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:52.665557    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:32:52.665573    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:32:52.665578    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:52.677467    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:32:52.677478    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:32:52.692224    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:32:52.692236    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:32:52.703885    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:32:52.703898    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:32:52.715075    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:32:52.715084    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:32:52.730611    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:32:52.730623    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:32:52.745697    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:32:52.745707    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:32:52.757375    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:52.757385    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:52.762219    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:32:52.762224    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:32:52.773592    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:32:52.773604    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:32:52.785411    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:52.785425    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:52.808914    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:52.808923    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:32:52.841726    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:52.841819    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:52.842762    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:32:52.842770    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:32:52.857236    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:52.857249    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:52.892038    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:32:52.892048    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:32:52.912279    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:52.912288    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:32:52.912319    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:32:52.912323    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:32:52.912327    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:32:52.912331    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:32:52.912333    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
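
The gathering cycle above repeats one pattern per component: list matching containers with a docker name filter, then tail each container's last 400 log lines. A minimal local sketch of that loop (plain os/exec in place of minikube's SSH-backed ssh_runner; component names, the "k8s_" kubelet name prefix, and the 400-line tail are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components minikube filters for above; "k8s_" is the kubelet's
	// Docker container-name prefix.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
		}
	}
}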
	I0818 12:32:53.539563    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:53.539625    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:53.549887    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:53.549954    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:53.560086    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:53.560150    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:53.571095    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:53.571161    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:53.581513    3866 logs.go:276] 0 containers: []
	W0818 12:32:53.581525    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:53.581578    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:53.592770    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:53.592787    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:53.592793    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:53.607353    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:53.607362    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:53.623839    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:53.623850    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:53.641226    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:53.641236    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:53.662995    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:53.663003    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:53.666785    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:53.666794    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:53.678687    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:53.678697    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:53.691130    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:53.691140    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:53.729271    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:53.729280    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:53.744603    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:53.744613    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:53.758946    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:53.758956    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:53.770148    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:53.770161    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:53.804470    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:53.804484    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:53.834113    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:53.834124    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:53.853471    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:53.853482    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:53.865312    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:53.865323    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:53.878621    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:53.878632    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:56.392981    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:01.395166    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:01.395237    3866 kubeadm.go:597] duration metric: took 4m3.716019458s to restartPrimaryControlPlane
	W0818 12:33:01.395281    3866 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 12:33:01.395295    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0818 12:33:02.367713    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:33:02.372836    3866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:33:02.375888    3866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:33:02.378667    3866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:33:02.378674    3866 kubeadm.go:157] found existing configuration files:
	
	I0818 12:33:02.378700    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf
	I0818 12:33:02.381169    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:33:02.381191    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:33:02.384137    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf
	I0818 12:33:02.386851    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:33:02.386874    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:33:02.389852    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf
	I0818 12:33:02.392989    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:33:02.393023    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:33:02.395974    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf
	I0818 12:33:02.398493    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:33:02.398515    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
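
Each of the four checks above applies one rule: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise remove it so the upcoming `kubeadm init` regenerates it. A sketch of that cleanup under the same assumptions (files under /etc/kubernetes, endpoint string as grepped in the log, run as root on the guest):

package main

import (
	"os"
	"strings"
)

func main() {
	// Endpoint minikube greps for above; the port is per-profile.
	endpoint := "https://control-plane.minikube.internal:50472"
	for _, f := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove so `kubeadm init` rewrites it.
			os.Remove(path)
		}
	}
}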
	I0818 12:33:02.400986    3866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 12:33:02.419068    3866 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0818 12:33:02.419109    3866 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 12:33:02.468015    3866 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 12:33:02.468086    3866 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 12:33:02.468144    3866 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 12:33:02.519467    3866 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 12:33:02.530106    3866 out.go:235]   - Generating certificates and keys ...
	I0818 12:33:02.530148    3866 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 12:33:02.530180    3866 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 12:33:02.530222    3866 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 12:33:02.530251    3866 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 12:33:02.530288    3866 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 12:33:02.530324    3866 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 12:33:02.530364    3866 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 12:33:02.530398    3866 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 12:33:02.530456    3866 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 12:33:02.530493    3866 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 12:33:02.530514    3866 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 12:33:02.530545    3866 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 12:33:02.724821    3866 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 12:33:02.959603    3866 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 12:33:02.991015    3866 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 12:33:03.097811    3866 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 12:33:03.128342    3866 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 12:33:03.129382    3866 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 12:33:03.129511    3866 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 12:33:03.217333    3866 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 12:33:03.221526    3866 out.go:235]   - Booting up control plane ...
	I0818 12:33:03.221578    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 12:33:03.221617    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 12:33:03.221655    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 12:33:03.221698    3866 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 12:33:03.221773    3866 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 12:33:02.916364    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:07.220529    3866 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002115 seconds
	I0818 12:33:07.220596    3866 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 12:33:07.224554    3866 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 12:33:07.737785    3866 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 12:33:07.738031    3866 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-521000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 12:33:08.241605    3866 kubeadm.go:310] [bootstrap-token] Using token: yvloaz.dit76wmmf7nv51fe
	I0818 12:33:08.245015    3866 out.go:235]   - Configuring RBAC rules ...
	I0818 12:33:08.245066    3866 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 12:33:08.245102    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 12:33:08.248837    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 12:33:08.250081    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 12:33:08.251083    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 12:33:08.252326    3866 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 12:33:08.256188    3866 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 12:33:08.442510    3866 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 12:33:08.645527    3866 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 12:33:08.646040    3866 kubeadm.go:310] 
	I0818 12:33:08.646070    3866 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 12:33:08.646074    3866 kubeadm.go:310] 
	I0818 12:33:08.646106    3866 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 12:33:08.646109    3866 kubeadm.go:310] 
	I0818 12:33:08.646122    3866 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 12:33:08.646147    3866 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 12:33:08.646175    3866 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 12:33:08.646178    3866 kubeadm.go:310] 
	I0818 12:33:08.646203    3866 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 12:33:08.646206    3866 kubeadm.go:310] 
	I0818 12:33:08.646238    3866 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 12:33:08.646246    3866 kubeadm.go:310] 
	I0818 12:33:08.646278    3866 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 12:33:08.646317    3866 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 12:33:08.646366    3866 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 12:33:08.646371    3866 kubeadm.go:310] 
	I0818 12:33:08.646422    3866 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 12:33:08.646464    3866 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 12:33:08.646474    3866 kubeadm.go:310] 
	I0818 12:33:08.646523    3866 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yvloaz.dit76wmmf7nv51fe \
	I0818 12:33:08.646577    3866 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 \
	I0818 12:33:08.646587    3866 kubeadm.go:310] 	--control-plane 
	I0818 12:33:08.646591    3866 kubeadm.go:310] 
	I0818 12:33:08.646635    3866 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 12:33:08.646638    3866 kubeadm.go:310] 
	I0818 12:33:08.646689    3866 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yvloaz.dit76wmmf7nv51fe \
	I0818 12:33:08.646752    3866 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 
	I0818 12:33:08.646913    3866 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
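
The --discovery-token-ca-cert-hash printed above is kubeadm's public-key pin: a sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A sketch that recomputes it from the CA file (path taken from the certificateDir shown earlier; this mirrors kubeadm's pubkeypin format rather than reusing its code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA written under the certificateDir shown earlier in the log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, the form that
	// `kubeadm join --discovery-token-ca-cert-hash sha256:...` expects.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}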
	I0818 12:33:08.646922    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:33:08.646930    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:33:08.651026    3866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 12:33:08.655031    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 12:33:08.657952    3866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
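
The 496-byte payload copied above is the bridge CNI config minikube recommends for docker-runtime clusters on v1.24+. The exact file is not reproduced in the log; a minimal bridge conflist along these lines (field values illustrative, not the actual 496-byte payload) would be written to the same destination:

package main

import "os"

func main() {
	// Illustrative bridge conflist; same plugin chain and destination
	// as the scp above, but not minikube's exact file contents.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}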
	I0818 12:33:08.663371    3866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 12:33:08.663425    3866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 12:33:08.663481    3866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-521000 minikube.k8s.io/updated_at=2024_08_18T12_33_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=stopped-upgrade-521000 minikube.k8s.io/primary=true
	I0818 12:33:08.707378    3866 ops.go:34] apiserver oom_adj: -16
	I0818 12:33:08.707378    3866 kubeadm.go:1113] duration metric: took 43.996417ms to wait for elevateKubeSystemPrivileges
	I0818 12:33:08.707495    3866 kubeadm.go:394] duration metric: took 4m11.042339959s to StartCluster
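
The oom_adj probe above confirms the apiserver is shielded from the OOM killer (-16). A sketch equivalent to the shell one-liner `cat /proc/$(pgrep kube-apiserver)/oom_adj`, scanning /proc directly:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		cmdline, err := os.ReadFile("/proc/" + e.Name() + "/cmdline")
		if err != nil {
			continue // not a pid directory, or the process exited
		}
		if !strings.Contains(string(cmdline), "kube-apiserver") {
			continue
		}
		adj, err := os.ReadFile("/proc/" + e.Name() + "/oom_adj")
		if err == nil {
			fmt.Printf("apiserver oom_adj: %s", adj)
		}
	}
}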
	I0818 12:33:08.707508    3866 settings.go:142] acquiring lock: {Name:mk5a561ec5cb84c336df08f67624cd54d50bdb17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:33:08.707599    3866 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:33:08.708001    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:33:08.708220    3866 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:33:08.708308    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:33:08.708248    3866 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:33:08.708360    3866 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-521000"
	I0818 12:33:08.708377    3866 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-521000"
	W0818 12:33:08.708381    3866 addons.go:243] addon storage-provisioner should already be in state true
	I0818 12:33:08.708388    3866 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-521000"
	I0818 12:33:08.708392    3866 host.go:66] Checking if "stopped-upgrade-521000" exists ...
	I0818 12:33:08.708401    3866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-521000"
	I0818 12:33:08.709387    3866 kapi.go:59] client config for stopped-upgrade-521000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fbd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:33:08.709541    3866 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-521000"
	W0818 12:33:08.709546    3866 addons.go:243] addon default-storageclass should already be in state true
	I0818 12:33:08.709552    3866 host.go:66] Checking if "stopped-upgrade-521000" exists ...
	I0818 12:33:08.710971    3866 out.go:177] * Verifying Kubernetes components...
	I0818 12:33:08.711292    3866 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 12:33:08.714126    3866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 12:33:08.714135    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:33:08.717934    3866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:33:07.918667    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:07.918889    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:33:07.944253    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:33:07.944383    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:33:07.962564    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:33:07.962650    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:33:07.975855    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:33:07.975934    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:33:07.987400    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:33:07.987468    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:33:07.998157    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:33:07.998221    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:33:08.009310    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:33:08.009381    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:33:08.019783    3721 logs.go:276] 0 containers: []
	W0818 12:33:08.019795    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:33:08.019853    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:33:08.030708    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:33:08.030725    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:33:08.030730    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:33:08.065742    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:33:08.065754    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:33:08.078037    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:33:08.078050    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:33:08.091231    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:33:08.091242    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:33:08.113650    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:33:08.113660    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:33:08.127766    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:33:08.127775    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:33:08.141755    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:33:08.141769    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:33:08.166412    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:33:08.166423    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:33:08.178073    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:33:08.178084    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:33:08.211779    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:08.211873    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:08.212802    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:33:08.212808    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:33:08.225117    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:33:08.225128    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:33:08.243807    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:33:08.243819    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:33:08.248707    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:33:08.248717    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:33:08.264271    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:33:08.264282    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:33:08.276098    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:33:08.276110    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:33:08.291918    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:08.291932    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:33:08.291959    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:33:08.291964    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:08.291967    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:08.291971    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:08.291974    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:08.721951    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:33:08.724928    3866 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:33:08.724934    3866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 12:33:08.724941    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:33:08.795658    3866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:33:08.800924    3866 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:33:08.800966    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:33:08.807574    3866 api_server.go:72] duration metric: took 99.343292ms to wait for apiserver process to appear ...
	I0818 12:33:08.807583    3866 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:33:08.807591    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:08.810988    3866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:33:08.852747    3866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 12:33:09.195038    3866 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:33:09.195050    3866 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:33:13.809697    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:13.809744    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:18.296035    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:18.810111    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:18.810133    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:23.298370    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:23.298530    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:33:23.316168    3721 logs.go:276] 1 containers: [abb325a21fe2]
	I0818 12:33:23.316267    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:33:23.329914    3721 logs.go:276] 1 containers: [8f2010cc8d3f]
	I0818 12:33:23.330019    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:33:23.341410    3721 logs.go:276] 4 containers: [250ecb5c1a5a a6858da4bd1c a143f9cf22f7 d277ab82a17b]
	I0818 12:33:23.341481    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:33:23.354717    3721 logs.go:276] 1 containers: [45a9965db2f7]
	I0818 12:33:23.354787    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:33:23.365326    3721 logs.go:276] 1 containers: [a7f0b6f03bc2]
	I0818 12:33:23.365394    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:33:23.376174    3721 logs.go:276] 1 containers: [1f4199745101]
	I0818 12:33:23.376241    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:33:23.393392    3721 logs.go:276] 0 containers: []
	W0818 12:33:23.393403    3721 logs.go:278] No container was found matching "kindnet"
	I0818 12:33:23.393467    3721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:33:23.403502    3721 logs.go:276] 1 containers: [55ab978b4e96]
	I0818 12:33:23.403520    3721 logs.go:123] Gathering logs for coredns [250ecb5c1a5a] ...
	I0818 12:33:23.403526    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 250ecb5c1a5a"
	I0818 12:33:23.414949    3721 logs.go:123] Gathering logs for kube-scheduler [45a9965db2f7] ...
	I0818 12:33:23.414962    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a9965db2f7"
	I0818 12:33:23.430167    3721 logs.go:123] Gathering logs for kube-proxy [a7f0b6f03bc2] ...
	I0818 12:33:23.430177    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7f0b6f03bc2"
	I0818 12:33:23.441953    3721 logs.go:123] Gathering logs for dmesg ...
	I0818 12:33:23.441963    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:33:23.447259    3721 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:33:23.447273    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:33:23.500604    3721 logs.go:123] Gathering logs for kube-apiserver [abb325a21fe2] ...
	I0818 12:33:23.500630    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 abb325a21fe2"
	I0818 12:33:23.515159    3721 logs.go:123] Gathering logs for coredns [a6858da4bd1c] ...
	I0818 12:33:23.515172    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6858da4bd1c"
	I0818 12:33:23.527274    3721 logs.go:123] Gathering logs for coredns [a143f9cf22f7] ...
	I0818 12:33:23.527285    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a143f9cf22f7"
	I0818 12:33:23.539991    3721 logs.go:123] Gathering logs for coredns [d277ab82a17b] ...
	I0818 12:33:23.540003    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d277ab82a17b"
	I0818 12:33:23.551544    3721 logs.go:123] Gathering logs for Docker ...
	I0818 12:33:23.551562    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:33:23.574473    3721 logs.go:123] Gathering logs for storage-provisioner [55ab978b4e96] ...
	I0818 12:33:23.574481    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55ab978b4e96"
	I0818 12:33:23.586252    3721 logs.go:123] Gathering logs for container status ...
	I0818 12:33:23.586261    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:33:23.597728    3721 logs.go:123] Gathering logs for kubelet ...
	I0818 12:33:23.597738    3721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 12:33:23.632616    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:23.632709    3721 logs.go:138] Found kubelet problem: Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:23.633638    3721 logs.go:123] Gathering logs for etcd [8f2010cc8d3f] ...
	I0818 12:33:23.633648    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f2010cc8d3f"
	I0818 12:33:23.649596    3721 logs.go:123] Gathering logs for kube-controller-manager [1f4199745101] ...
	I0818 12:33:23.649614    3721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f4199745101"
	I0818 12:33:23.676222    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:23.676231    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 12:33:23.676260    3721 out.go:270] X Problems detected in kubelet:
	W0818 12:33:23.676266    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	W0818 12:33:23.676269    3721 out.go:270]   Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	I0818 12:33:23.676272    3721 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:23.676274    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:23.810586    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:23.810629    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:28.805155    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:28.805200    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:33.669869    3721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:33.801259    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:33.801287    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:38.668798    3721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:38.672505    3721 out.go:201] 
	W0818 12:33:38.676521    3721 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0818 12:33:38.676528    3721 out.go:270] * 
	W0818 12:33:38.676947    3721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:33:38.691467    3721 out.go:201] 
	I0818 12:33:38.799166    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:38.799189    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0818 12:33:39.182924    3866 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0818 12:33:39.189141    3866 out.go:177] * Enabled addons: storage-provisioner
	I0818 12:33:39.201057    3866 addons.go:510] duration metric: took 30.507204875s for enable addons: enabled=[storage-provisioner]
	I0818 12:33:43.797927    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:43.797978    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:48.797832    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:48.797861    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
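
Both processes spend the remainder of the run in the same loop visible above: GET /healthz with a short client timeout, log the failure, retry until the overall node wait expires. A sketch of that probe (endpoint and the 6m0s budget from the log; InsecureSkipVerify is a stand-in here, as the kapi client config earlier shows minikube actually verifies against its own ca.crt):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between attempts above
		Transport: &http.Transport{
			// Sketch only: minikube verifies against its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthz ok")
			return
		}
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}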
	
	
	==> Docker <==
	-- Journal begins at Sun 2024-08-18 19:24:43 UTC, ends at Sun 2024-08-18 19:33:54 UTC. --
	Aug 18 19:33:35 running-upgrade-363000 dockerd[3339]: time="2024-08-18T19:33:35.404368253Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/905d266cce0b841768cfe263743419925ebfd27ff692f18d794c69afb8461956 pid=17949 runtime=io.containerd.runc.v2
	Aug 18 19:33:35 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:35Z" level=error msg="ContainerStats resp: {0x400079d540 linux}"
	Aug 18 19:33:35 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:35Z" level=error msg="ContainerStats resp: {0x4000359b00 linux}"
	Aug 18 19:33:36 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:36Z" level=error msg="ContainerStats resp: {0x40007d0040 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400095b640 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400095b780 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400083c000 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400083c500 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x40007d1f80 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400087e440 linux}"
	Aug 18 19:33:37 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:37Z" level=error msg="ContainerStats resp: {0x400083d100 linux}"
	Aug 18 19:33:42 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 18 19:33:47 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 18 19:33:47 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:47Z" level=error msg="ContainerStats resp: {0x4000931040 linux}"
	Aug 18 19:33:47 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:47Z" level=error msg="ContainerStats resp: {0x40007d1f00 linux}"
	Aug 18 19:33:48 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:48Z" level=error msg="ContainerStats resp: {0x400087f680 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x40004104c0 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x4000410840 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x4000410a00 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x4000410e00 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x400083d9c0 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x40004117c0 linux}"
	Aug 18 19:33:49 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:49Z" level=error msg="ContainerStats resp: {0x40004105c0 linux}"
	Aug 18 19:33:52 running-upgrade-363000 cri-dockerd[3173]: time="2024-08-18T19:33:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	905d266cce0b8       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   924468d92daa5
	0cac0b92eef90       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   04ccd533a44ce
	250ecb5c1a5a9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   924468d92daa5
	a6858da4bd1c1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   04ccd533a44ce
	a7f0b6f03bc21       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   cc0fdc190964d
	55ab978b4e962       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   475525f8a53a9
	45a9965db2f7e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5446dcdcb327b
	abb325a21fe2a       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   bca073e700fc3
	1f41997451017       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   426bed8ab1923
	8f2010cc8d3f8       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   b794e53d7c258
	
	
	==> coredns [0cac0b92eef9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7667433413190684889.2200192519157539240. HINFO: read udp 10.244.0.2:50944->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7667433413190684889.2200192519157539240. HINFO: read udp 10.244.0.2:42891->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7667433413190684889.2200192519157539240. HINFO: read udp 10.244.0.2:51621->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7667433413190684889.2200192519157539240. HINFO: read udp 10.244.0.2:55784->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7667433413190684889.2200192519157539240. HINFO: read udp 10.244.0.2:52329->10.0.2.3:53: i/o timeout
	
	
	==> coredns [250ecb5c1a5a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:55540->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:44984->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:40045->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:58306->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:58299->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:44517->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:58596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:39707->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:60181->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1615913674826824213.6659068050956388431. HINFO: read udp 10.244.0.3:51152->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [905d266cce0b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6030874716344840052.5343706765777175204. HINFO: read udp 10.244.0.3:50820->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6030874716344840052.5343706765777175204. HINFO: read udp 10.244.0.3:55075->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6030874716344840052.5343706765777175204. HINFO: read udp 10.244.0.3:36722->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6030874716344840052.5343706765777175204. HINFO: read udp 10.244.0.3:36272->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6030874716344840052.5343706765777175204. HINFO: read udp 10.244.0.3:48235->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a6858da4bd1c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:37089->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:41191->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:39742->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:35107->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:54863->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:55541->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:45626->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:48012->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:44771->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5744160893824971027.1945357090954443294. HINFO: read udp 10.244.0.2:45174->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-363000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-363000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=running-upgrade-363000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T12_29_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:29:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-363000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:29:34 +0000   Sun, 18 Aug 2024 19:29:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:29:34 +0000   Sun, 18 Aug 2024 19:29:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:29:34 +0000   Sun, 18 Aug 2024 19:29:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:29:34 +0000   Sun, 18 Aug 2024 19:29:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-363000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0d8b83d2ecc44dcb52bbba083fabc46
	  System UUID:                a0d8b83d2ecc44dcb52bbba083fabc46
	  Boot ID:                    5f6beb43-1214-4c26-aa1b-17efd43aade3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7hbvx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-rsnc2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-363000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-363000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-363000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-jq2pk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-363000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m20s  kubelet          Node running-upgrade-363000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node running-upgrade-363000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node running-upgrade-363000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node running-upgrade-363000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-363000 event: Registered Node running-upgrade-363000 in Controller
	
	
	==> dmesg <==
	[  +1.697237] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.074456] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.079615] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.137382] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085635] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.079902] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.676373] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Aug18 19:25] systemd-fstab-generator[1945]: Ignoring "noauto" for root device
	[  +2.804044] systemd-fstab-generator[2225]: Ignoring "noauto" for root device
	[  +0.148142] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.094092] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +0.088735] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
	[  +4.506991] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.206771] systemd-fstab-generator[3129]: Ignoring "noauto" for root device
	[  +0.081655] systemd-fstab-generator[3141]: Ignoring "noauto" for root device
	[  +0.082455] systemd-fstab-generator[3152]: Ignoring "noauto" for root device
	[  +0.097821] systemd-fstab-generator[3166]: Ignoring "noauto" for root device
	[  +2.509898] systemd-fstab-generator[3324]: Ignoring "noauto" for root device
	[  +3.097606] systemd-fstab-generator[3991]: Ignoring "noauto" for root device
	[  +1.407613] systemd-fstab-generator[4345]: Ignoring "noauto" for root device
	[ +17.939517] kauditd_printk_skb: 68 callbacks suppressed
	[Aug18 19:29] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.464803] systemd-fstab-generator[12473]: Ignoring "noauto" for root device
	[  +5.628152] systemd-fstab-generator[13089]: Ignoring "noauto" for root device
	[  +0.466931] systemd-fstab-generator[13220]: Ignoring "noauto" for root device
	
	
	==> etcd [8f2010cc8d3f] <==
	{"level":"info","ts":"2024-08-18T19:29:29.564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-18T19:29:29.564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-18T19:29:29.565Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T19:29:29.565Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T19:29:29.565Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T19:29:29.565Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-18T19:29:29.565Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-18T19:29:29.962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-18T19:29:29.963Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:29:29.965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:29:29.965Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:29:29.966Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:29:29.965Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-363000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:29:29.966Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:29:29.975Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:29:29.975Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:29:29.975Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-18T19:29:29.976Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:29:29.977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:33:54 up 9 min,  0 users,  load average: 0.23, 0.34, 0.19
	Linux running-upgrade-363000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [abb325a21fe2] <==
	I0818 19:29:31.301513       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0818 19:29:31.303197       1 controller.go:611] quota admission added evaluator for: namespaces
	I0818 19:29:31.348002       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0818 19:29:31.349208       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:29:31.349264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:29:31.350523       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:29:31.353657       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0818 19:29:31.365463       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0818 19:29:32.098896       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0818 19:29:32.251016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0818 19:29:32.252385       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0818 19:29:32.252391       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 19:29:32.375794       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:29:32.386055       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 19:29:32.407389       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0818 19:29:32.409366       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0818 19:29:32.409708       1 controller.go:611] quota admission added evaluator for: endpoints
	I0818 19:29:32.411016       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 19:29:33.398131       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0818 19:29:34.080324       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0818 19:29:34.083350       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0818 19:29:34.103985       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0818 19:29:47.046978       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0818 19:29:47.147173       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0818 19:29:47.622169       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [1f4199745101] <==
	I0818 19:29:46.406450       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0818 19:29:46.406492       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-363000. Assuming now as a timestamp.
	I0818 19:29:46.406520       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0818 19:29:46.406549       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0818 19:29:46.406636       1 event.go:294] "Event occurred" object="running-upgrade-363000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-363000 event: Registered Node running-upgrade-363000 in Controller"
	I0818 19:29:46.407346       1 shared_informer.go:262] Caches are synced for deployment
	I0818 19:29:46.408980       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0818 19:29:46.409703       1 shared_informer.go:262] Caches are synced for stateful set
	I0818 19:29:46.411075       1 shared_informer.go:262] Caches are synced for GC
	I0818 19:29:46.413703       1 shared_informer.go:262] Caches are synced for daemon sets
	I0818 19:29:46.414225       1 shared_informer.go:262] Caches are synced for HPA
	I0818 19:29:46.415724       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0818 19:29:46.418673       1 shared_informer.go:262] Caches are synced for persistent volume
	I0818 19:29:46.419800       1 shared_informer.go:262] Caches are synced for attach detach
	I0818 19:29:46.425135       1 shared_informer.go:262] Caches are synced for ephemeral
	I0818 19:29:46.443371       1 shared_informer.go:262] Caches are synced for job
	I0818 19:29:46.443401       1 shared_informer.go:262] Caches are synced for PVC protection
	I0818 19:29:46.447507       1 shared_informer.go:262] Caches are synced for resource quota
	I0818 19:29:46.858952       1 shared_informer.go:262] Caches are synced for garbage collector
	I0818 19:29:46.942398       1 shared_informer.go:262] Caches are synced for garbage collector
	I0818 19:29:46.942412       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0818 19:29:47.047872       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0818 19:29:47.149994       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jq2pk"
	I0818 19:29:47.249486       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rsnc2"
	I0818 19:29:47.252817       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7hbvx"
	
	
	==> kube-proxy [a7f0b6f03bc2] <==
	I0818 19:29:47.611393       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0818 19:29:47.611418       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0818 19:29:47.611428       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0818 19:29:47.620175       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0818 19:29:47.620185       1 server_others.go:206] "Using iptables Proxier"
	I0818 19:29:47.620209       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0818 19:29:47.620294       1 server.go:661] "Version info" version="v1.24.1"
	I0818 19:29:47.620297       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:29:47.620536       1 config.go:317] "Starting service config controller"
	I0818 19:29:47.620972       1 config.go:226] "Starting endpoint slice config controller"
	I0818 19:29:47.620976       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0818 19:29:47.621198       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0818 19:29:47.621225       1 config.go:444] "Starting node config controller"
	I0818 19:29:47.621254       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0818 19:29:47.721931       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0818 19:29:47.721935       1 shared_informer.go:262] Caches are synced for node config
	I0818 19:29:47.721949       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [45a9965db2f7] <==
	W0818 19:29:31.302259       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:29:31.302262       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0818 19:29:31.302273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 19:29:31.302276       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0818 19:29:31.302287       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:29:31.302289       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0818 19:29:31.302300       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 19:29:31.302302       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0818 19:29:31.302330       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 19:29:31.302332       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0818 19:29:31.302351       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:29:31.302353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0818 19:29:31.302367       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 19:29:31.302369       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0818 19:29:32.162761       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 19:29:32.162789       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0818 19:29:32.215854       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:29:32.215867       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0818 19:29:32.229760       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:29:32.229781       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:29:32.245537       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:29:32.245557       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0818 19:29:32.254338       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:29:32.254432       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0818 19:29:34.796602       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sun 2024-08-18 19:24:43 UTC, ends at Sun 2024-08-18 19:33:55 UTC. --
	Aug 18 19:29:35 running-upgrade-363000 kubelet[13095]: E0818 19:29:35.914632   13095 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-363000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-363000"
	Aug 18 19:29:36 running-upgrade-363000 kubelet[13095]: E0818 19:29:36.114049   13095 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-363000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-363000"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: I0818 19:29:46.210968   13095 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: I0818 19:29:46.211297   13095 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: I0818 19:29:46.411525   13095 topology_manager.go:200] "Topology Admit Handler"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: I0818 19:29:46.512392   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1530e61c-888a-4748-adab-ba8be61cc962-tmp\") pod \"storage-provisioner\" (UID: \"1530e61c-888a-4748-adab-ba8be61cc962\") " pod="kube-system/storage-provisioner"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: I0818 19:29:46.512422   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shjn\" (UniqueName: \"kubernetes.io/projected/1530e61c-888a-4748-adab-ba8be61cc962-kube-api-access-2shjn\") pod \"storage-provisioner\" (UID: \"1530e61c-888a-4748-adab-ba8be61cc962\") " pod="kube-system/storage-provisioner"
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: E0818 19:29:46.616551   13095 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: E0818 19:29:46.616571   13095 projected.go:192] Error preparing data for projected volume kube-api-access-2shjn for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 18 19:29:46 running-upgrade-363000 kubelet[13095]: E0818 19:29:46.616613   13095 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/1530e61c-888a-4748-adab-ba8be61cc962-kube-api-access-2shjn podName:1530e61c-888a-4748-adab-ba8be61cc962 nodeName:}" failed. No retries permitted until 2024-08-18 19:29:47.116598097 +0000 UTC m=+13.053490595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2shjn" (UniqueName: "kubernetes.io/projected/1530e61c-888a-4748-adab-ba8be61cc962-kube-api-access-2shjn") pod "storage-provisioner" (UID: "1530e61c-888a-4748-adab-ba8be61cc962") : configmap "kube-root-ca.crt" not found
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.152727   13095 topology_manager.go:200] "Topology Admit Handler"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.253246   13095 topology_manager.go:200] "Topology Admit Handler"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: W0818 19:29:47.255710   13095 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: E0818 19:29:47.255729   13095 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-363000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-363000' and this object
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.258213   13095 topology_manager.go:200] "Topology Admit Handler"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.317880   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzk69\" (UniqueName: \"kubernetes.io/projected/2df1653d-0ea3-4aad-9fce-da3817cea0ae-kube-api-access-vzk69\") pod \"kube-proxy-jq2pk\" (UID: \"2df1653d-0ea3-4aad-9fce-da3817cea0ae\") " pod="kube-system/kube-proxy-jq2pk"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.317903   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2df1653d-0ea3-4aad-9fce-da3817cea0ae-xtables-lock\") pod \"kube-proxy-jq2pk\" (UID: \"2df1653d-0ea3-4aad-9fce-da3817cea0ae\") " pod="kube-system/kube-proxy-jq2pk"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.317914   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2df1653d-0ea3-4aad-9fce-da3817cea0ae-lib-modules\") pod \"kube-proxy-jq2pk\" (UID: \"2df1653d-0ea3-4aad-9fce-da3817cea0ae\") " pod="kube-system/kube-proxy-jq2pk"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.317924   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2df1653d-0ea3-4aad-9fce-da3817cea0ae-kube-proxy\") pod \"kube-proxy-jq2pk\" (UID: \"2df1653d-0ea3-4aad-9fce-da3817cea0ae\") " pod="kube-system/kube-proxy-jq2pk"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.418967   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93ac6352-1722-4f92-80d8-5dd3cfd9758c-config-volume\") pod \"coredns-6d4b75cb6d-7hbvx\" (UID: \"93ac6352-1722-4f92-80d8-5dd3cfd9758c\") " pod="kube-system/coredns-6d4b75cb6d-7hbvx"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.418989   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cxcw\" (UniqueName: \"kubernetes.io/projected/93ac6352-1722-4f92-80d8-5dd3cfd9758c-kube-api-access-6cxcw\") pod \"coredns-6d4b75cb6d-7hbvx\" (UID: \"93ac6352-1722-4f92-80d8-5dd3cfd9758c\") " pod="kube-system/coredns-6d4b75cb6d-7hbvx"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.419014   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zknvw\" (UniqueName: \"kubernetes.io/projected/986a0426-b65c-4d14-bf1d-b06bdcfea649-kube-api-access-zknvw\") pod \"coredns-6d4b75cb6d-rsnc2\" (UID: \"986a0426-b65c-4d14-bf1d-b06bdcfea649\") " pod="kube-system/coredns-6d4b75cb6d-rsnc2"
	Aug 18 19:29:47 running-upgrade-363000 kubelet[13095]: I0818 19:29:47.419024   13095 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/986a0426-b65c-4d14-bf1d-b06bdcfea649-config-volume\") pod \"coredns-6d4b75cb6d-rsnc2\" (UID: \"986a0426-b65c-4d14-bf1d-b06bdcfea649\") " pod="kube-system/coredns-6d4b75cb6d-rsnc2"
	Aug 18 19:33:35 running-upgrade-363000 kubelet[13095]: I0818 19:33:35.473866   13095 scope.go:110] "RemoveContainer" containerID="a143f9cf22f7257d5be2fbec6424e9aa7a3ddbbe7693bf0d9cf66f15e9ddf299"
	Aug 18 19:33:35 running-upgrade-363000 kubelet[13095]: I0818 19:33:35.486008   13095 scope.go:110] "RemoveContainer" containerID="d277ab82a17ba64574656e49b527034d389b0fdf08b4305d1c3f04602e35030b"
	
	
	==> storage-provisioner [55ab978b4e96] <==
	I0818 19:29:47.516629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 19:29:47.527130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 19:29:47.527158       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 19:29:47.533678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 19:29:47.533747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-363000_26bcc398-9561-440e-b560-acc1042921aa!
	I0818 19:29:47.534475       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8f9a9db1-0f1d-4da6-8ad0-633b4b6607c4", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-363000_26bcc398-9561-440e-b560-acc1042921aa became leader
	I0818 19:29:47.635371       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-363000_26bcc398-9561-440e-b560-acc1042921aa!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-363000 -n running-upgrade-363000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-363000 -n running-upgrade-363000: exit status 2 (15.654359167s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-363000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-363000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-363000
--- FAIL: TestRunningBinaryUpgrade (594.26s)
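
The TestRunningBinaryUpgrade logs above show both CoreDNS replicas failing their startup HINFO self-check with "read udp ...->10.0.2.3:53: i/o timeout": the pods never reach the QEMU user-mode resolver at 10.0.2.3, even though the node itself reports Ready. As a minimal sketch (not part of the test suite; it assumes it is run from inside the guest VM, and the probe hostname is arbitrary), a few lines of Go reproduce that probe path by forcing a lookup through the same upstream address:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Force every lookup through the QEMU user-mode DNS that the
		// CoreDNS pods in the logs above forward to (10.0.2.3:53).
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// "kubernetes.io" is just an arbitrary name to resolve.
		if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
			fmt.Println("upstream DNS unreachable:", err) // expect: i/o timeout on the failing VM
			return
		}
		fmt.Println("upstream DNS reachable")
	}

If this times out while the host itself resolves names fine, the fault is in the user-mode network path between guest and host rather than in CoreDNS.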
TestKubernetesUpgrade (19.02s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.888981959s)

-- stdout --
	* [kubernetes-upgrade-288000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-288000" primary control-plane node in "kubernetes-upgrade-288000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-288000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
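
Both "Creating qemu2 VM" attempts above abort with Connection refused on /var/run/socket_vmnet, and the stderr trace below shows libmachine hitting the same error when it execs socket_vmnet_client. As a minimal sketch (assuming it runs on the affected macOS host), this Go snippet checks whether anything is listening on that unix socket, which is all socket_vmnet_client needs to succeed:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to.
		// "connection refused" or "no such file or directory" means the
		// socket_vmnet daemon is not running on the host.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refusal here points at the daemon rather than minikube; on Homebrew-based setups it is typically restarted with "sudo brew services restart socket_vmnet".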
** stderr ** 
	I0818 12:27:16.886265    3782 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:27:16.886400    3782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:27:16.886403    3782 out.go:358] Setting ErrFile to fd 2...
	I0818 12:27:16.886405    3782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:27:16.886530    3782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:27:16.887650    3782 out.go:352] Setting JSON to false
	I0818 12:27:16.904364    3782 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3406,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:27:16.904440    3782 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:27:16.911293    3782 out.go:177] * [kubernetes-upgrade-288000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:27:16.919286    3782 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:27:16.919333    3782 notify.go:220] Checking for updates...
	I0818 12:27:16.926297    3782 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:27:16.929319    3782 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:27:16.934509    3782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:27:16.937354    3782 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:27:16.940241    3782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:27:16.943629    3782 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:27:16.943698    3782 config.go:182] Loaded profile config "running-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:27:16.943754    3782 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:27:16.948328    3782 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:27:16.955296    3782 start.go:297] selected driver: qemu2
	I0818 12:27:16.955303    3782 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:27:16.955319    3782 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:27:16.957576    3782 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:27:16.960260    3782 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:27:16.963289    3782 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 12:27:16.963325    3782 cni.go:84] Creating CNI manager for ""
	I0818 12:27:16.963332    3782 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 12:27:16.963365    3782 start.go:340] cluster config:
	{Name:kubernetes-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:27:16.967344    3782 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:27:16.976253    3782 out.go:177] * Starting "kubernetes-upgrade-288000" primary control-plane node in "kubernetes-upgrade-288000" cluster
	I0818 12:27:16.980307    3782 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 12:27:16.980326    3782 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 12:27:16.980333    3782 cache.go:56] Caching tarball of preloaded images
	I0818 12:27:16.980396    3782 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:27:16.980402    3782 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 12:27:16.980451    3782 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kubernetes-upgrade-288000/config.json ...
	I0818 12:27:16.980461    3782 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kubernetes-upgrade-288000/config.json: {Name:mkb95cd7ab27ed0dcd25b757be9d91ec7455b84d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:27:16.980652    3782 start.go:360] acquireMachinesLock for kubernetes-upgrade-288000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:27:16.980685    3782 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "kubernetes-upgrade-288000"
	I0818 12:27:16.980696    3782 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:27:16.980725    3782 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:27:16.989304    3782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:27:17.004577    3782 start.go:159] libmachine.API.Create for "kubernetes-upgrade-288000" (driver="qemu2")
	I0818 12:27:17.004615    3782 client.go:168] LocalClient.Create starting
	I0818 12:27:17.004678    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:27:17.004710    3782 main.go:141] libmachine: Decoding PEM data...
	I0818 12:27:17.004722    3782 main.go:141] libmachine: Parsing certificate...
	I0818 12:27:17.004759    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:27:17.004781    3782 main.go:141] libmachine: Decoding PEM data...
	I0818 12:27:17.004795    3782 main.go:141] libmachine: Parsing certificate...
	I0818 12:27:17.005195    3782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:27:17.166885    3782 main.go:141] libmachine: Creating SSH key...
	I0818 12:27:17.251377    3782 main.go:141] libmachine: Creating Disk image...
	I0818 12:27:17.251382    3782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:27:17.251565    3782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:17.260785    3782 main.go:141] libmachine: STDOUT: 
	I0818 12:27:17.260804    3782 main.go:141] libmachine: STDERR: 
	I0818 12:27:17.260850    3782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2 +20000M
	I0818 12:27:17.269047    3782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:27:17.269067    3782 main.go:141] libmachine: STDERR: 
	I0818 12:27:17.269079    3782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:17.269083    3782 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:27:17.269097    3782 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:27:17.269127    3782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c4:90:27:6a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:17.270765    3782 main.go:141] libmachine: STDOUT: 
	I0818 12:27:17.270781    3782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:27:17.270806    3782 client.go:171] duration metric: took 266.190417ms to LocalClient.Create
	I0818 12:27:19.272993    3782 start.go:128] duration metric: took 2.292260791s to createHost
	I0818 12:27:19.273110    3782 start.go:83] releasing machines lock for "kubernetes-upgrade-288000", held for 2.292435834s
	W0818 12:27:19.273316    3782 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:27:19.280661    3782 out.go:177] * Deleting "kubernetes-upgrade-288000" in qemu2 ...
	W0818 12:27:19.316958    3782 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:27:19.317047    3782 start.go:729] Will try again in 5 seconds ...
	I0818 12:27:24.319243    3782 start.go:360] acquireMachinesLock for kubernetes-upgrade-288000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:27:24.319795    3782 start.go:364] duration metric: took 452.167µs to acquireMachinesLock for "kubernetes-upgrade-288000"
	I0818 12:27:24.319870    3782 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:27:24.320158    3782 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:27:24.328833    3782 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:27:24.380351    3782 start.go:159] libmachine.API.Create for "kubernetes-upgrade-288000" (driver="qemu2")
	I0818 12:27:24.380407    3782 client.go:168] LocalClient.Create starting
	I0818 12:27:24.380527    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:27:24.380595    3782 main.go:141] libmachine: Decoding PEM data...
	I0818 12:27:24.380612    3782 main.go:141] libmachine: Parsing certificate...
	I0818 12:27:24.380672    3782 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:27:24.380718    3782 main.go:141] libmachine: Decoding PEM data...
	I0818 12:27:24.380732    3782 main.go:141] libmachine: Parsing certificate...
	I0818 12:27:24.381304    3782 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:27:24.541938    3782 main.go:141] libmachine: Creating SSH key...
	I0818 12:27:24.685369    3782 main.go:141] libmachine: Creating Disk image...
	I0818 12:27:24.685378    3782 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:27:24.685623    3782 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:24.695249    3782 main.go:141] libmachine: STDOUT: 
	I0818 12:27:24.695275    3782 main.go:141] libmachine: STDERR: 
	I0818 12:27:24.695323    3782 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2 +20000M
	I0818 12:27:24.703451    3782 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:27:24.703466    3782 main.go:141] libmachine: STDERR: 
	I0818 12:27:24.703479    3782 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
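The disk image above is produced by two qemu-img calls: a raw-to-qcow2 convert followed by a resize. A minimal Go sketch of the same two-step sequence (file names and the +20000M growth mirror the log; they are placeholders, not the harness's real paths):

	// diskimage.go: convert a raw image to qcow2, then grow it, via qemu-img.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// createDisk mirrors the logged sequence: qemu-img convert, then qemu-img resize.
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			log.Fatal(err)
		}
	}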
	I0818 12:27:24.703485    3782 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:27:24.703494    3782 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:27:24.703536    3782 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e3:12:15:25:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:24.705200    3782 main.go:141] libmachine: STDOUT: 
	I0818 12:27:24.705222    3782 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
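Note the "-netdev socket,id=net0,fd=3" argument in the command above: the /opt/socket_vmnet/bin/socket_vmnet_client wrapper is expected to connect to /var/run/socket_vmnet and hand the connected descriptor to qemu as fd 3, which is why a refused connection aborts the launch before qemu ever runs. An illustrative Go version of that hand-off pattern (not socket_vmnet's actual implementation; remaining qemu flags omitted):

	// fdpass.go: connect to a unix socket and pass it to a child as fd 3.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the exact failure in the log: nobody is listening.
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,id=net0,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}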
	I0818 12:27:24.705236    3782 client.go:171] duration metric: took 324.824875ms to LocalClient.Create
	I0818 12:27:26.707419    3782 start.go:128] duration metric: took 2.387248584s to createHost
	I0818 12:27:26.707498    3782 start.go:83] releasing machines lock for "kubernetes-upgrade-288000", held for 2.387700083s
	W0818 12:27:26.707813    3782 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-288000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-288000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:27:26.716481    3782 out.go:201] 
	W0818 12:27:26.722541    3782 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:27:26.722566    3782 out.go:270] * 
	* 
	W0818 12:27:26.725386    3782 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:27:26.733505    3782 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
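Both create attempts die at the same point: nothing is listening on /var/run/socket_vmnet, so the socket_vmnet daemon is down or was never started on this agent. A minimal Go probe to confirm that from the host (the socket path is taken from the log):

	// socketcheck.go: probe the socket_vmnet unix socket.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}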
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-288000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-288000: (3.7283835s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-288000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-288000 status --format={{.Host}}: exit status 7 (51.337084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
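minikube status encodes machine state in its exit code, which is why the harness treats exit status 7 as "may be ok": the profile exists but the host (and with it kubelet and apiserver) is down, matching the "Stopped" output. A sketch of reading the code the same way (binary and profile names are the ones from the log):

	// statuscheck.go: run "minikube status" and inspect the exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "kubernetes-upgrade-288000", "status", "--format={{.Host}}")
		out, err := cmd.Output()
		fmt.Printf("status output: %s", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 7 here corresponds to a stopped host, per the "Stopped" output above.
			fmt.Printf("exit code: %d\n", ee.ExitCode())
		}
	}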
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180159667s)

-- stdout --
	* [kubernetes-upgrade-288000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-288000" primary control-plane node in "kubernetes-upgrade-288000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-288000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-288000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:27:30.558579    3816 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:27:30.558879    3816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:27:30.558883    3816 out.go:358] Setting ErrFile to fd 2...
	I0818 12:27:30.558885    3816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:27:30.559012    3816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:27:30.560010    3816 out.go:352] Setting JSON to false
	I0818 12:27:30.576352    3816 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3420,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:27:30.576423    3816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:27:30.581522    3816 out.go:177] * [kubernetes-upgrade-288000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:27:30.588455    3816 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:27:30.588499    3816 notify.go:220] Checking for updates...
	I0818 12:27:30.595478    3816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:27:30.598456    3816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:27:30.601467    3816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:27:30.604474    3816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:27:30.607376    3816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:27:30.610783    3816 config.go:182] Loaded profile config "kubernetes-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0818 12:27:30.611063    3816 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:27:30.615413    3816 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:27:30.622461    3816 start.go:297] selected driver: qemu2
	I0818 12:27:30.622469    3816 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:27:30.622539    3816 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:27:30.624786    3816 cni.go:84] Creating CNI manager for ""
	I0818 12:27:30.624803    3816 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:27:30.624835    3816 start.go:340] cluster config:
	{Name:kubernetes-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:27:30.628427    3816 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:27:30.634406    3816 out.go:177] * Starting "kubernetes-upgrade-288000" primary control-plane node in "kubernetes-upgrade-288000" cluster
	I0818 12:27:30.638453    3816 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:27:30.638470    3816 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:27:30.638479    3816 cache.go:56] Caching tarball of preloaded images
	I0818 12:27:30.638549    3816 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:27:30.638555    3816 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:27:30.638624    3816 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kubernetes-upgrade-288000/config.json ...
	I0818 12:27:30.639119    3816 start.go:360] acquireMachinesLock for kubernetes-upgrade-288000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:27:30.639148    3816 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "kubernetes-upgrade-288000"
	I0818 12:27:30.639157    3816 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:27:30.639162    3816 fix.go:54] fixHost starting: 
	I0818 12:27:30.639287    3816 fix.go:112] recreateIfNeeded on kubernetes-upgrade-288000: state=Stopped err=<nil>
	W0818 12:27:30.639295    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:27:30.646400    3816 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-288000" ...
	I0818 12:27:30.650338    3816 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:27:30.650400    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e3:12:15:25:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:30.652424    3816 main.go:141] libmachine: STDOUT: 
	I0818 12:27:30.652442    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:27:30.652469    3816 fix.go:56] duration metric: took 13.3065ms for fixHost
	I0818 12:27:30.652473    3816 start.go:83] releasing machines lock for "kubernetes-upgrade-288000", held for 13.320834ms
	W0818 12:27:30.652480    3816 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:27:30.652524    3816 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:27:30.652528    3816 start.go:729] Will try again in 5 seconds ...
	I0818 12:27:35.654653    3816 start.go:360] acquireMachinesLock for kubernetes-upgrade-288000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:27:35.654980    3816 start.go:364] duration metric: took 259.916µs to acquireMachinesLock for "kubernetes-upgrade-288000"
	I0818 12:27:35.655126    3816 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:27:35.655139    3816 fix.go:54] fixHost starting: 
	I0818 12:27:35.655557    3816 fix.go:112] recreateIfNeeded on kubernetes-upgrade-288000: state=Stopped err=<nil>
	W0818 12:27:35.655572    3816 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:27:35.660977    3816 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-288000" ...
	I0818 12:27:35.668882    3816 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:27:35.668993    3816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e3:12:15:25:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubernetes-upgrade-288000/disk.qcow2
	I0818 12:27:35.674091    3816 main.go:141] libmachine: STDOUT: 
	I0818 12:27:35.674131    3816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:27:35.674173    3816 fix.go:56] duration metric: took 19.03425ms for fixHost
	I0818 12:27:35.674206    3816 start.go:83] releasing machines lock for "kubernetes-upgrade-288000", held for 19.161625ms
	W0818 12:27:35.674314    3816 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-288000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-288000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:27:35.682782    3816 out.go:201] 
	W0818 12:27:35.686977    3816 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:27:35.686998    3816 out.go:270] * 
	* 
	W0818 12:27:35.688327    3816 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:27:35.698895    3816 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-288000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-288000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-288000 version --output=json: exit status 1 (52.972625ms)

** stderr ** 
	error: context "kubernetes-upgrade-288000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
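The final kubectl failure follows directly from the start failures: no cluster was ever created, so no context named after the profile was written to the kubeconfig. One way to check for the context programmatically, using client-go's clientcmd loader (illustrative, not the test's own code):

	// ctxcheck.go: load the default kubeconfig and look a context up by name.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["kubernetes-upgrade-288000"]; !ok {
			fmt.Println(`context "kubernetes-upgrade-288000" does not exist`)
			os.Exit(1)
		}
		fmt.Println("context found")
	}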
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-18 12:27:35.763368 -0700 PDT m=+3019.525276251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-288000 -n kubernetes-upgrade-288000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-288000 -n kubernetes-upgrade-288000: exit status 7 (31.56875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-288000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-288000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-288000
--- FAIL: TestKubernetesUpgrade (19.02s)
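For reference, the retry shape visible in the log ("StartHost failed, but will try again" followed by "Will try again in 5 seconds") is a single delayed re-attempt before surfacing GUEST_PROVISION; schematically (createHost is a stand-in, not minikube's real function):

	// retry.go: one delayed retry of host creation, then give up.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			err = createHost()
		}
		if err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}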

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.41s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1218886399/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.41s)
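This failure (and the one below) is environmental rather than a regression: the hyperkit driver is Intel-only, and this job runs on an arm64 Mac, so exit status 56 (DRV_UNSUPPORTED_OS) is the expected outcome. A guard of roughly this shape would skip the subtest instead of failing it (a sketch, not the harness's code):

	// archgate.go: skip hyperkit-specific work on Apple Silicon.
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			fmt.Println("hyperkit is not supported on darwin/arm64; skipping")
			return
		}
		fmt.Println("hyperkit driver may be usable here")
	}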

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3014555009/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

TestStoppedBinaryUpgrade/Upgrade (573.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3549679409 start -p stopped-upgrade-521000 --memory=2200 --vm-driver=qemu2 
E0818 12:28:06.675694    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3549679409 start -p stopped-upgrade-521000 --memory=2200 --vm-driver=qemu2 : (39.379126375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3549679409 -p stopped-upgrade-521000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3549679409 -p stopped-upgrade-521000 stop: (12.11794425s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-521000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0818 12:31:08.647970    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:33:06.673067    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-521000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.480534834s)
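The (dbg) Run lines above show the test's three phases: start a cluster with a released v1.26.0 binary, stop it, then restart it with the binary under test. Schematically (binary paths are placeholders for the downloaded release and the build output):

	// upgradeflow.go: old-binary start, stop, new-binary start.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", bin, args, err)
		}
	}

	func main() {
		old, cur := "./minikube-v1.26.0", "./out/minikube-darwin-arm64" // placeholders
		run(old, "start", "-p", "stopped-upgrade-521000", "--memory=2200", "--vm-driver=qemu2")
		run(old, "-p", "stopped-upgrade-521000", "stop")
		run(cur, "start", "-p", "stopped-upgrade-521000", "--memory=2200", "--driver=qemu2")
	}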

-- stdout --
	* [stopped-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-521000" primary control-plane node in "stopped-upgrade-521000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-521000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0818 12:28:28.540516    3866 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:28:28.540664    3866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:28:28.540668    3866 out.go:358] Setting ErrFile to fd 2...
	I0818 12:28:28.540671    3866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:28:28.540836    3866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:28:28.541967    3866 out.go:352] Setting JSON to false
	I0818 12:28:28.561404    3866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3478,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:28:28.561488    3866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:28:28.565504    3866 out.go:177] * [stopped-upgrade-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:28:28.573447    3866 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:28:28.573482    3866 notify.go:220] Checking for updates...
	I0818 12:28:28.580440    3866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:28:28.584458    3866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:28:28.587474    3866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:28:28.590356    3866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:28:28.593402    3866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:28:28.596735    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:28:28.598339    3866 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 12:28:28.601440    3866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:28:28.605456    3866 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:28:28.610436    3866 start.go:297] selected driver: qemu2
	I0818 12:28:28.610442    3866 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:28.610496    3866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:28:28.613088    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:28:28.613107    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:28:28.613135    3866 start.go:340] cluster config:
	{Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:28.613183    3866 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:28:28.620405    3866 out.go:177] * Starting "stopped-upgrade-521000" primary control-plane node in "stopped-upgrade-521000" cluster
	I0818 12:28:28.624418    3866 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:28:28.624433    3866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0818 12:28:28.624440    3866 cache.go:56] Caching tarball of preloaded images
	I0818 12:28:28.624492    3866 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:28:28.624498    3866 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0818 12:28:28.624546    3866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/config.json ...
	I0818 12:28:28.624987    3866 start.go:360] acquireMachinesLock for stopped-upgrade-521000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:28:28.625015    3866 start.go:364] duration metric: took 22.666µs to acquireMachinesLock for "stopped-upgrade-521000"
	I0818 12:28:28.625025    3866 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:28:28.625029    3866 fix.go:54] fixHost starting: 
	I0818 12:28:28.625141    3866 fix.go:112] recreateIfNeeded on stopped-upgrade-521000: state=Stopped err=<nil>
	W0818 12:28:28.625151    3866 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:28:28.633400    3866 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-521000" ...
	I0818 12:28:28.637447    3866 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:28:28.637508    3866 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50437-:22,hostfwd=tcp::50438-:2376,hostname=stopped-upgrade-521000 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/disk.qcow2
	I0818 12:28:28.682640    3866 main.go:141] libmachine: STDOUT: 
	I0818 12:28:28.682666    3866 main.go:141] libmachine: STDERR: 
	I0818 12:28:28.682671    3866 main.go:141] libmachine: Waiting for VM to start (ssh -p 50437 docker@127.0.0.1)...
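"Waiting for VM to start" here amounts to polling the forwarded SSH port (50437, from the hostfwd option in the qemu command above) until it accepts TCP connections. A minimal version of that wait loop:

	// sshwait.go: poll a forwarded SSH port until it accepts a connection.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh at %s not reachable after %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:50437", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}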
	I0818 12:28:48.853911    3866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/config.json ...
	I0818 12:28:48.854156    3866 machine.go:93] provisionDockerMachine start ...
	I0818 12:28:48.854213    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.854406    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.854413    3866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:28:48.917427    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:28:48.917442    3866 buildroot.go:166] provisioning hostname "stopped-upgrade-521000"
	I0818 12:28:48.917495    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.917609    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.917616    3866 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-521000 && echo "stopped-upgrade-521000" | sudo tee /etc/hostname
	I0818 12:28:48.979277    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-521000
	
	I0818 12:28:48.979327    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:48.979441    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:48.979449    3866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-521000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-521000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-521000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:28:49.040916    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:28:49.040927    3866 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-984/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-984/.minikube}
	I0818 12:28:49.040934    3866 buildroot.go:174] setting up certificates
	I0818 12:28:49.040938    3866 provision.go:84] configureAuth start
	I0818 12:28:49.040944    3866 provision.go:143] copyHostCerts
	I0818 12:28:49.041015    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem, removing ...
	I0818 12:28:49.041021    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem
	I0818 12:28:49.041132    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/key.pem (1679 bytes)
	I0818 12:28:49.041320    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem, removing ...
	I0818 12:28:49.041323    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem
	I0818 12:28:49.041372    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/ca.pem (1078 bytes)
	I0818 12:28:49.041474    3866 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem, removing ...
	I0818 12:28:49.041477    3866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem
	I0818 12:28:49.041518    3866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-984/.minikube/cert.pem (1123 bytes)
	I0818 12:28:49.041609    3866 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-521000 san=[127.0.0.1 localhost minikube stopped-upgrade-521000]
	I0818 12:28:49.115774    3866 provision.go:177] copyRemoteCerts
	I0818 12:28:49.115820    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:28:49.115831    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.147737    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0818 12:28:49.154298    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0818 12:28:49.161151    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:28:49.168439    3866 provision.go:87] duration metric: took 127.493042ms to configureAuth
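configureAuth copies the host certs and signs a server certificate against the existing CA for the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-521000). A compressed sketch of that signing step with crypto/x509; it self-generates a throwaway CA rather than loading ca.pem/ca-key.pem, and key sizes and validity are simplified:

	// servercert.go: sign a server cert with a CA for the logged SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-521000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: 127.0.0.1 localhost minikube stopped-upgrade-521000
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-521000"},
		}
		if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
			log.Fatal(err)
		}
		log.Println("server cert signed by CA")
	}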
	I0818 12:28:49.168448    3866 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:28:49.168560    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:28:49.168591    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.168685    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.168690    3866 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:28:49.226692    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:28:49.226701    3866 buildroot.go:70] root file system type: tmpfs
	I0818 12:28:49.226753    3866 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:28:49.226799    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.226925    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.226962    3866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:28:49.290536    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:28:49.290587    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.290698    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.290707    3866 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:28:49.641164    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:28:49.641178    3866 machine.go:96] duration metric: took 787.020792ms to provisionDockerMachine
	I0818 12:28:49.641185    3866 start.go:293] postStartSetup for "stopped-upgrade-521000" (driver="qemu2")
	I0818 12:28:49.641193    3866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:28:49.641257    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:28:49.641265    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.675135    3866 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:28:49.676489    3866 info.go:137] Remote host: Buildroot 2021.02.12
	I0818 12:28:49.676497    3866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/addons for local assets ...
	I0818 12:28:49.676574    3866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-984/.minikube/files for local assets ...
	I0818 12:28:49.676672    3866 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem -> 14592.pem in /etc/ssl/certs
	I0818 12:28:49.676766    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:28:49.679750    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:28:49.686886    3866 start.go:296] duration metric: took 45.693542ms for postStartSetup
	I0818 12:28:49.686904    3866 fix.go:56] duration metric: took 21.062060625s for fixHost
	I0818 12:28:49.686949    3866 main.go:141] libmachine: Using SSH client type: native
	I0818 12:28:49.687069    3866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a045a0] 0x104a06e00 <nil>  [] 0s} localhost 50437 <nil> <nil>}
	I0818 12:28:49.687075    3866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:28:49.747920    3866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724009330.236153754
	
	I0818 12:28:49.747931    3866 fix.go:216] guest clock: 1724009330.236153754
	I0818 12:28:49.747936    3866 fix.go:229] Guest: 2024-08-18 12:28:50.236153754 -0700 PDT Remote: 2024-08-18 12:28:49.686906 -0700 PDT m=+21.175173084 (delta=549.247754ms)
	I0818 12:28:49.747953    3866 fix.go:200] guest clock delta is within tolerance: 549.247754ms
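	
	fix.go compares the guest's `date +%s.%N` output against the host clock and only proceeds when the skew is tolerable; here the delta is about 549ms. A small Go sketch of that comparison (the 2-second tolerance is an assumption for illustration, not minikube's exact threshold):
	
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses the output of `date +%s.%N` run in the guest
	// and returns how far the guest clock is from the given host time.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Values taken from the log above; the tolerance is illustrative.
		delta, _ := guestClockDelta("1724009330.236153754",
			time.Date(2024, 8, 18, 19, 28, 49, 686906000, time.UTC))
		if math.Abs(delta.Seconds()) < 2.0 {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}
	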
	I0818 12:28:49.747956    3866 start.go:83] releasing machines lock for "stopped-upgrade-521000", held for 21.123122209s
	I0818 12:28:49.748027    3866 ssh_runner.go:195] Run: cat /version.json
	I0818 12:28:49.748037    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:28:49.748027    3866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:28:49.748066    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	W0818 12:28:49.748698    3866 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50437: connect: connection refused
	I0818 12:28:49.748723    3866 retry.go:31] will retry after 368.413037ms: dial tcp [::1]:50437: connect: connection refused
	W0818 12:28:50.169242    3866 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0818 12:28:50.169417    3866 ssh_runner.go:195] Run: systemctl --version
	I0818 12:28:50.173892    3866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:28:50.177873    3866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:28:50.177952    3866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0818 12:28:50.183391    3866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0818 12:28:50.192006    3866 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
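	
	The find/sed pipeline above rewrites any existing bridge or podman CNI config so its subnet and gateway match the cluster's pod CIDR (10.244.0.0/16). The same substitution in pure Go, applied to a trimmed example conflist (the input JSON is illustrative, not the real 87-podman-bridge.conflist):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Rewrite the subnet and gateway fields of a CNI config to the
	// cluster pod CIDR, mirroring the sed expressions in the log.
	func main() {
		conf := `{"ipam": {"ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}}`
		subnet := regexp.MustCompile(`"subnet": ".*?"`)
		gateway := regexp.MustCompile(`"gateway": ".*?"`)
		conf = subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
		conf = gateway.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
		fmt.Println(conf)
	}
	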
	I0818 12:28:50.192025    3866 start.go:495] detecting cgroup driver to use...
	I0818 12:28:50.192168    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:28:50.202557    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0818 12:28:50.206444    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:28:50.209896    3866 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:28:50.209931    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:28:50.213657    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:28:50.217345    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:28:50.220931    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:28:50.224603    3866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:28:50.227809    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:28:50.230483    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:28:50.233399    3866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
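	
	The run of sed edits above retargets /etc/containerd/config.toml at the detected "cgroupfs" driver: sandbox image, the runc v2 runtime, conf_dir, and most importantly SystemdCgroup = false. The core line rewrite, sketched in Go (the TOML snippet is a minimal stand-in for the real config file):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Flip SystemdCgroup to false while preserving the line's indentation,
	// the same edit the sed command in the log performs in place.
	func main() {
		toml := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(toml, `${1}SystemdCgroup = false`))
	}
	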
	I0818 12:28:50.236923    3866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:28:50.239810    3866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:28:50.242353    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:50.326816    3866 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:28:50.337728    3866 start.go:495] detecting cgroup driver to use...
	I0818 12:28:50.337795    3866 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:28:50.343391    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:28:50.348317    3866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:28:50.354366    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:28:50.359164    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:28:50.363681    3866 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:28:50.408908    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:28:50.414001    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:28:50.419287    3866 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:28:50.420557    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:28:50.423373    3866 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0818 12:28:50.428216    3866 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:28:50.503395    3866 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:28:50.566690    3866 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:28:50.566760    3866 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:28:50.571753    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:50.647318    3866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:28:51.786175    3866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.138848625s)
	I0818 12:28:51.786232    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:28:51.794794    3866 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:28:51.801314    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:28:51.806301    3866 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:28:51.869675    3866 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:28:51.949084    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:52.030125    3866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:28:52.036441    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:28:52.041065    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:52.123926    3866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:28:52.163042    3866 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:28:52.163131    3866 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:28:52.165387    3866 start.go:563] Will wait 60s for crictl version
	I0818 12:28:52.165438    3866 ssh_runner.go:195] Run: which crictl
	I0818 12:28:52.166943    3866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:28:52.180924    3866 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0818 12:28:52.180994    3866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:28:52.196482    3866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:28:52.217048    3866 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0818 12:28:52.217188    3866 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0818 12:28:52.218568    3866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
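	
	The /etc/hosts update is a grep-away-then-append one-liner: any stale line ending in the tab-separated hostname is dropped and a fresh "IP<TAB>name" entry is written back through a temp file. A pure-Go sketch of the same upsert (the `sudo cp` step back into /etc/hosts is elided):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// upsertHost drops any existing line for the name, then appends the
	// fresh tab-separated entry, mirroring the shell one-liner in the log.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
	}
	
	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost", "10.0.2.2", "host.minikube.internal"))
	}
	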
	I0818 12:28:52.222100    3866 kubeadm.go:883] updating cluster {Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0818 12:28:52.222146    3866 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0818 12:28:52.222186    3866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:28:52.233943    3866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:28:52.233952    3866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
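	
	Note the mismatch here: the extracted preload carries images tagged k8s.gcr.io/*, while this minikube build checks for registry.k8s.io/* names (the Kubernetes image registry was renamed between those releases), so the preload is judged missing even though the layers are present, and the slower per-image cache path below is taken. A sketch of why the exact-name check fails (the normalize helper is illustrative, not minikube's comparison code):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// normalize maps the old registry prefix onto the new one, showing
	// that the preloaded image and the expected image are the same bits.
	func normalize(img string) string {
		return strings.Replace(img, "k8s.gcr.io/", "registry.k8s.io/", 1)
	}
	
	func main() {
		got := "k8s.gcr.io/kube-apiserver:v1.24.1"  // what `docker images` reports
		want := "registry.k8s.io/kube-apiserver:v1.24.1" // what this build expects
		fmt.Printf("exact match: %v, normalized match: %v\n",
			got == want, normalize(got) == want)
	}
	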
	I0818 12:28:52.234002    3866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:28:52.237373    3866 ssh_runner.go:195] Run: which lz4
	I0818 12:28:52.238597    3866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 12:28:52.239829    3866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 12:28:52.239841    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0818 12:28:53.117329    3866 docker.go:649] duration metric: took 878.7675ms to copy over tarball
	I0818 12:28:53.117391    3866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 12:28:54.272726    3866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155330958s)
	I0818 12:28:54.272739    3866 ssh_runner.go:146] rm: /preloaded.tar.lz4
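	
	The preload path just executed is: scp the ~360MB lz4 tarball into the guest, unpack it over /var to restore Docker's image store, then delete the tarball. The extraction step, sketched as a Go wrapper around the same tar invocation (running this requires sudo, tar, and lz4; paths match the log but are otherwise illustrative):
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	// Unpack the preload tarball over /var, preserving xattrs such as
	// security.capability, then remove it, as the log does.
	func main() {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
		_ = os.Remove("/preloaded.tar.lz4")
	}
	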
	I0818 12:28:54.288450    3866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 12:28:54.291340    3866 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0818 12:28:54.296086    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:54.377910    3866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:28:55.963779    3866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.585865458s)
	I0818 12:28:55.963869    3866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:28:55.975179    3866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 12:28:55.975190    3866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0818 12:28:55.975195    3866 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 12:28:55.980210    3866 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:55.982417    3866 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:55.984243    3866 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:55.984502    3866 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:55.985975    3866 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:55.985977    3866 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:55.987351    3866 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0818 12:28:55.987372    3866 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:55.988746    3866 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:55.988807    3866 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:55.989852    3866 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:55.990652    3866 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0818 12:28:55.991087    3866 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:55.991215    3866 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:55.992027    3866 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:55.992811    3866 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.421971    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.424473    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.440083    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0818 12:28:56.440484    3866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0818 12:28:56.440514    3866 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.440549    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0818 12:28:56.454144    3866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0818 12:28:56.454165    3866 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.454217    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0818 12:28:56.464661    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0818 12:28:56.464733    3866 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0818 12:28:56.464759    3866 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0818 12:28:56.464812    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0818 12:28:56.467682    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0818 12:28:56.471808    3866 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0818 12:28:56.471920    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.475558    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.477122    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0818 12:28:56.477226    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0818 12:28:56.481903    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.486926    3866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0818 12:28:56.486948    3866 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.486997    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0818 12:28:56.491367    3866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0818 12:28:56.491387    3866 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.491370    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0818 12:28:56.491415    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0818 12:28:56.491433    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0818 12:28:56.500150    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.507880    3866 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0818 12:28:56.507906    3866 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.507912    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0818 12:28:56.507958    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0818 12:28:56.508023    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:28:56.513834    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0818 12:28:56.516799    3866 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0818 12:28:56.516811    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0818 12:28:56.524393    3866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0818 12:28:56.524415    3866 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.524431    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0818 12:28:56.524459    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0818 12:28:56.524468    3866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0818 12:28:56.524475    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0818 12:28:56.524551    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:28:56.568939    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0818 12:28:56.568947    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0818 12:28:56.568967    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0818 12:28:56.568978    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0818 12:28:56.609761    3866 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0818 12:28:56.609785    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0818 12:28:56.646940    3866 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0818 12:28:56.647059    3866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.715995    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0818 12:28:56.716030    3866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0818 12:28:56.716050    3866 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.716110    3866 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:28:56.764417    3866 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 12:28:56.764546    3866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:28:56.777332    3866 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0818 12:28:56.777365    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0818 12:28:56.842756    3866 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 12:28:56.842772    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0818 12:28:57.125925    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 12:28:57.125948    3866 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0818 12:28:57.125957    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0818 12:28:57.277286    3866 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0818 12:28:57.277325    3866 cache_images.go:92] duration metric: took 1.302134333s to LoadCachedImages
	W0818 12:28:57.277368    3866 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
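	
	Each required image above follows the same cache flow: `docker image inspect` by ID, `docker rmi` on hash mismatch, scp of the cached tarball into /var/lib/minikube/images, then a piped `docker load`. The run fails overall because the kube-scheduler tarball is absent from the host cache, which is the stat error LoadCachedImages reports even though pause, coredns, storage-provisioner, and etcd transferred successfully. A sketch of the per-image load (the guest scp hop is elided; paths are illustrative):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// loadFromCache verifies the cached tarball exists, then streams it
	// into docker load, mirroring the `sudo cat ... | docker load` lines.
	func loadFromCache(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cache miss: %w", err) // kube-scheduler hit this case
		}
		out, err := exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load").CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		if err := loadFromCache("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	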
	I0818 12:28:57.277377    3866 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0818 12:28:57.277438    3866 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-521000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
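	
	The kubelet ExecStart rendered above is assembled from the node's settings: cri-dockerd as the CRI endpoint, the node IP, and the profile name as hostname override. Rebuilt as a Go helper for illustration (the values are the ones in the log; the function is not minikube's actual template code):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// kubeletFlags reassembles the flag set from the rendered unit above.
	func kubeletFlags(node, ip string) string {
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock",
			"--hostname-override=" + node,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + ip,
		}
		return "/var/lib/minikube/binaries/v1.24.1/kubelet " + strings.Join(flags, " ")
	}
	
	func main() {
		fmt.Println(kubeletFlags("stopped-upgrade-521000", "10.0.2.15"))
	}
	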
	I0818 12:28:57.277506    3866 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:28:57.293590    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:28:57.293602    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:28:57.293607    3866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:28:57.293616    3866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-521000 NodeName:stopped-upgrade-521000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:28:57.293684    3866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-521000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 12:28:57.293734    3866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0818 12:28:57.297197    3866 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:28:57.297224    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 12:28:57.300354    3866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0818 12:28:57.305222    3866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:28:57.310169    3866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0818 12:28:57.315743    3866 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0818 12:28:57.316973    3866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:28:57.320900    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:28:57.400121    3866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:28:57.407205    3866 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000 for IP: 10.0.2.15
	I0818 12:28:57.407212    3866 certs.go:194] generating shared ca certs ...
	I0818 12:28:57.407221    3866 certs.go:226] acquiring lock for ca certs: {Name:mk3b1337311c50e97f8d40ca44614fc311e1e2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.407389    3866 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key
	I0818 12:28:57.407430    3866 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key
	I0818 12:28:57.407435    3866 certs.go:256] generating profile certs ...
	I0818 12:28:57.407507    3866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key
	I0818 12:28:57.407524    3866 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e
	I0818 12:28:57.407547    3866 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0818 12:28:57.539209    3866 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e ...
	I0818 12:28:57.539226    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e: {Name:mk981c85252a31c73892b4889a1884da9e2890a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.539541    3866 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e ...
	I0818 12:28:57.539547    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e: {Name:mk95ec7db0cf2e39e6562e99e65de92f1b4ddd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.539686    3866 certs.go:381] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt.4691636e -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt
	I0818 12:28:57.540160    3866 certs.go:385] copying /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key.4691636e -> /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key
	I0818 12:28:57.540323    3866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.key
	I0818 12:28:57.540475    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem (1338 bytes)
	W0818 12:28:57.540501    3866 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459_empty.pem, impossibly tiny 0 bytes
	I0818 12:28:57.540509    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:28:57.540547    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem (1078 bytes)
	I0818 12:28:57.540567    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:28:57.540584    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/certs/key.pem (1679 bytes)
	I0818 12:28:57.540623    3866 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem (1708 bytes)
	I0818 12:28:57.540968    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:28:57.548445    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 12:28:57.555055    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:28:57.561807    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 12:28:57.568929    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:28:57.576229    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 12:28:57.582918    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:28:57.589524    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:28:57.597015    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:28:57.604355    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/certs/1459.pem --> /usr/share/ca-certificates/1459.pem (1338 bytes)
	I0818 12:28:57.612193    3866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/ssl/certs/14592.pem --> /usr/share/ca-certificates/14592.pem (1708 bytes)
	I0818 12:28:57.619116    3866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:28:57.623910    3866 ssh_runner.go:195] Run: openssl version
	I0818 12:28:57.625752    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14592.pem && ln -fs /usr/share/ca-certificates/14592.pem /etc/ssl/certs/14592.pem"
	I0818 12:28:57.629456    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.631055    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:45 /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.631075    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14592.pem
	I0818 12:28:57.632904    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14592.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:28:57.636053    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:28:57.638979    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.640320    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.640340    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:28:57.642239    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:28:57.645557    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1459.pem && ln -fs /usr/share/ca-certificates/1459.pem /etc/ssl/certs/1459.pem"
	I0818 12:28:57.648883    3866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.650331    3866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:45 /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.650349    3866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1459.pem
	I0818 12:28:57.652031    3866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1459.pem /etc/ssl/certs/51391683.0"
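	
	The `openssl x509 -hash` / `ln -fs` pairs above maintain OpenSSL's subject-hash lookup directory: each CA PEM gets a /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA here) so TLS clients can find it by subject. The same two steps sketched in Go (paths are illustrative; this needs the openssl binary):
	
	package main
	
	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)
	
	// linkBySubjectHash computes the certificate's subject hash with
	// openssl and links the PEM under <hash>.0, mimicking `ln -fs`.
	func linkBySubjectHash(pem, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := certDir + "/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // force-replace, as -f does
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
	}
	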
	I0818 12:28:57.654863    3866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:28:57.656299    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:28:57.658133    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:28:57.659997    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:28:57.661774    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:28:57.663765    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:28:57.665535    3866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 12:28:57.667363    3866 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50472 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 12:28:57.667428    3866 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:28:57.677887    3866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:28:57.681351    3866 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:28:57.681356    3866 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:28:57.681378    3866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:28:57.685252    3866 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:28:57.685561    3866 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-521000" does not appear in /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:28:57.685659    3866 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-984/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-521000" cluster setting kubeconfig missing "stopped-upgrade-521000" context setting]
	I0818 12:28:57.685845    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:28:57.686318    3866 kapi.go:59] client config for stopped-upgrade-521000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fbd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:28:57.686659    3866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:28:57.689479    3866 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-521000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
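	
	Config drift detection is nothing more than `sudo diff -u` between the kubeadm.yaml already on disk and the freshly rendered one; exit status 1 means reconfigure, which here is triggered by the unix:// CRI socket prefix and the systemd-to-cgroupfs driver change shown in the diff. A Go sketch distinguishing the three diff outcomes (the helper name is illustrative):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// kubeadmDrifted runs `sudo diff -u current fresh`: exit 0 means no
	// drift, exit 1 means the files differ (the diff in the log), and
	// anything else means diff itself failed (e.g. a missing file).
	func kubeadmDrifted(current, fresh string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", current, fresh).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}
	
	func main() {
		drifted, diff, err := kubeadmDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drifted, diff, err)
	}
	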
	I0818 12:28:57.689484    3866 kubeadm.go:1160] stopping kube-system containers ...
	I0818 12:28:57.689518    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:28:57.700535    3866 docker.go:483] Stopping containers: [949b564f2519 6751986ea10a d4daa11446a6 ed27014bf882 48a2672c14a5 d9e0e5771a1b 78e59ac9d2c3 13aeff4a8a09]
	I0818 12:28:57.700602    3866 ssh_runner.go:195] Run: docker stop 949b564f2519 6751986ea10a d4daa11446a6 ed27014bf882 48a2672c14a5 d9e0e5771a1b 78e59ac9d2c3 13aeff4a8a09
	I0818 12:28:57.711095    3866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 12:28:57.716652    3866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:28:57.719947    3866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:28:57.719956    3866 kubeadm.go:157] found existing configuration files:
	
	I0818 12:28:57.719994    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf
	I0818 12:28:57.722901    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:28:57.722944    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:28:57.725514    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf
	I0818 12:28:57.728137    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:28:57.728162    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:28:57.730573    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf
	I0818 12:28:57.733221    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:28:57.733242    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:28:57.736365    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf
	I0818 12:28:57.738850    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:28:57.738872    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 12:28:57.741578    3866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:28:57.744651    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:57.767535    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.202445    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.349904    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.372361    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 12:28:58.394359    3866 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:28:58.394438    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:58.896533    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:59.396451    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:28:59.400826    3866 api_server.go:72] duration metric: took 1.006476042s to wait for apiserver process to appear ...
	I0818 12:28:59.400835    3866 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:28:59.400850    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:04.402960    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:04.403010    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:09.403478    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:09.403513    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:14.403892    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:14.403929    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:19.404442    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:19.404481    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:24.405525    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:24.405581    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:29.406686    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:29.406724    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:34.407891    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:34.407914    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:39.409394    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:39.409444    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:44.411490    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:44.411544    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:49.413783    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:49.413807    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:54.415961    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:29:54.416009    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:29:59.418263    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
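Each healthz probe above fails after roughly five seconds with "Client.Timeout exceeded while awaiting headers", which is what a plain HTTP client with a short timeout looks like when nothing is listening on the guest's apiserver address. A minimal polling sketch, assuming a 5s client timeout and the apiserver's self-signed certificate (hence InsecureSkipVerify); this is an illustration of the probe pattern, not minikube's actual retry policy:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gaps between probes in the log
    		Transport: &http.Transport{
    			// The apiserver serves a self-signed cert for 10.0.2.15.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 12; i++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
    			time.Sleep(time.Second)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("gave up waiting for healthz")
    }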
	I0818 12:29:59.418389    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:29:59.429658    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:29:59.429738    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:29:59.440858    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:29:59.440921    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:29:59.451489    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:29:59.451555    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:29:59.463345    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:29:59.463415    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:29:59.478970    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:29:59.479037    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:29:59.491345    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:29:59.491406    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:29:59.502202    3866 logs.go:276] 0 containers: []
	W0818 12:29:59.502213    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:29:59.502266    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:29:59.513427    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:29:59.513446    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:29:59.513452    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:29:59.525450    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:29:59.525465    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:29:59.551900    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:29:59.551915    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:29:59.563454    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:29:59.563466    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:29:59.580220    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:29:59.580234    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:29:59.597441    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:29:59.597453    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:29:59.635390    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:29:59.635398    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:29:59.639553    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:29:59.639562    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:29:59.717784    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:29:59.717796    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:29:59.733298    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:29:59.733311    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:29:59.747374    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:29:59.747387    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:29:59.758538    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:29:59.758552    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:29:59.770218    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:29:59.770229    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:29:59.782715    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:29:59.782725    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:29:59.812188    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:29:59.812199    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:29:59.827368    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:29:59.827378    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:29:59.843397    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:29:59.843410    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
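Because the healthz probe never succeeds, minikube falls back to the diagnostic sweep seen above, and repeats it after every failed probe below: it resolves container IDs with docker ps name filters (k8s_kube-apiserver, k8s_etcd, and so on), then tails 400 lines from each. A condensed sketch of that gather step using the same docker CLI calls as the log, with error handling trimmed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers whose name matches the
    // k8s_<component> prefix, mirroring the logged docker ps filter.
    func containerIDs(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		ids := containerIDs(c)
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    		for _, id := range ids {
    			// Tail the last 400 lines, as in "docker logs --tail 400 <id>".
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			_ = logs // minikube folds these into its problem report
    		}
    	}
    }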
	I0818 12:30:02.357811    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:07.360035    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:07.360337    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:07.384695    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:07.384799    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:07.401525    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:07.401622    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:07.418586    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:07.418661    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:07.429440    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:07.429508    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:07.439965    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:07.440024    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:07.449820    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:07.449888    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:07.459766    3866 logs.go:276] 0 containers: []
	W0818 12:30:07.459777    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:07.459837    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:07.471389    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:07.471406    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:07.471412    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:07.475728    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:07.475736    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:07.515607    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:07.515622    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:07.530480    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:07.530491    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:07.547159    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:07.547168    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:07.559092    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:07.559103    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:07.586040    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:07.586052    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:07.599792    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:07.599804    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:07.636667    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:07.636676    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:07.654125    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:07.654136    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:07.666683    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:07.666692    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:07.677928    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:07.677939    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:07.702284    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:07.702298    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:07.718759    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:07.718770    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:07.732990    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:07.733001    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:07.755377    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:07.755392    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:07.770921    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:07.770930    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:10.285572    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:15.287127    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:15.287332    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:15.313065    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:15.313193    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:15.330439    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:15.330523    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:15.344233    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:15.344301    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:15.356030    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:15.356115    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:15.367211    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:15.367279    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:15.377621    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:15.377683    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:15.388249    3866 logs.go:276] 0 containers: []
	W0818 12:30:15.388262    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:15.388320    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:15.406562    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:15.406583    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:15.406588    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:15.418797    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:15.418808    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:15.443680    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:15.443688    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:15.455599    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:15.455611    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:15.494476    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:15.494484    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:15.518003    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:15.518014    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:15.542525    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:15.542537    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:15.557450    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:15.557461    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:15.569338    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:15.569350    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:15.604315    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:15.604327    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:15.629737    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:15.629748    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:15.643708    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:15.643717    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:15.660825    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:15.660840    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:15.676074    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:15.676087    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:15.680534    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:15.680542    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:15.691568    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:15.691579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:15.702710    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:15.702721    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:18.215460    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:23.217780    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:23.217896    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:23.230366    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:23.230445    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:23.241394    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:23.241463    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:23.256036    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:23.256106    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:23.266616    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:23.266676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:23.276368    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:23.276434    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:23.286643    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:23.286717    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:23.297239    3866 logs.go:276] 0 containers: []
	W0818 12:30:23.297252    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:23.297312    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:23.307891    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:23.307911    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:23.307916    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:23.343953    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:23.343967    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:23.355657    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:23.355669    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:23.371612    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:23.371624    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:23.387781    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:23.387797    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:23.399789    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:23.399802    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:23.410726    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:23.410737    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:23.443931    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:23.443943    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:23.458139    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:23.458152    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:23.472597    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:23.472608    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:23.490428    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:23.490438    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:23.515242    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:23.515252    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:23.553422    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:23.553435    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:23.565697    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:23.565707    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:23.578911    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:23.578926    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:23.583449    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:23.583457    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:23.601954    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:23.601966    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:26.115668    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:31.118213    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:31.118498    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:31.145768    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:31.145896    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:31.163003    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:31.163085    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:31.176332    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:31.176405    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:31.188108    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:31.188183    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:31.198370    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:31.198443    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:31.208463    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:31.208524    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:31.222881    3866 logs.go:276] 0 containers: []
	W0818 12:30:31.222893    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:31.222949    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:31.233691    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:31.233708    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:31.233714    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:31.248061    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:31.248071    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:31.259431    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:31.259444    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:31.275358    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:31.275370    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:31.289301    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:31.289313    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:31.301054    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:31.301065    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:31.312395    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:31.312408    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:31.324416    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:31.324427    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:31.342494    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:31.342506    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:31.381729    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:31.381742    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:31.386148    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:31.386158    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:31.423738    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:31.423749    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:31.448757    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:31.448768    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:31.463004    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:31.463013    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:31.474340    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:31.474354    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:31.489279    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:31.489291    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:31.501680    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:31.501694    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:34.026669    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:39.028983    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:39.029213    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:39.052123    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:39.052240    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:39.069719    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:39.069792    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:39.081924    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:39.081993    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:39.092873    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:39.092947    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:39.103033    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:39.103098    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:39.120041    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:39.120107    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:39.131081    3866 logs.go:276] 0 containers: []
	W0818 12:30:39.131092    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:39.131152    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:39.141383    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:39.141402    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:39.141407    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:39.161434    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:39.161448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:39.175369    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:39.175382    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:39.190065    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:39.190076    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:39.200962    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:39.200971    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:39.213412    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:39.213424    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:39.217936    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:39.217946    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:39.252404    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:39.252415    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:39.264939    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:39.264951    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:39.301538    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:39.301548    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:39.326260    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:39.326272    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:39.348077    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:39.348091    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:39.360773    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:39.360786    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:39.385809    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:39.385817    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:39.400891    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:39.400903    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:39.411780    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:39.411791    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:39.423068    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:39.423081    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:41.935250    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:46.937590    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:46.937787    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:46.955871    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:46.955940    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:46.967850    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:46.967933    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:46.978303    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:46.978370    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:46.988905    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:46.988980    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:47.003081    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:47.003151    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:47.014101    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:47.014164    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:47.024953    3866 logs.go:276] 0 containers: []
	W0818 12:30:47.024965    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:47.025020    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:47.035524    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:47.035541    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:47.035548    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:47.051359    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:47.051371    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:47.063400    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:47.063412    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:47.074997    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:47.075009    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:47.086416    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:47.086429    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:47.111203    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:47.111213    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:47.122932    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:47.122950    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:47.127033    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:47.127038    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:47.156005    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:47.156013    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:47.174040    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:47.174051    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:47.186350    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:47.186368    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:47.224651    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:47.224668    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:47.253693    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:47.253703    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:47.266705    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:47.266716    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:47.284456    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:47.284465    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:47.321226    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:47.321237    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:47.332708    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:47.332718    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:49.848682    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:30:54.850969    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:30:54.851158    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:30:54.877241    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:30:54.877358    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:30:54.893831    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:30:54.893907    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:30:54.906718    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:30:54.906789    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:30:54.918295    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:30:54.918355    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:30:54.930002    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:30:54.930064    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:30:54.940609    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:30:54.940676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:30:54.950379    3866 logs.go:276] 0 containers: []
	W0818 12:30:54.950390    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:30:54.950447    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:30:54.961138    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:30:54.961156    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:30:54.961162    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:30:54.972675    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:30:54.972689    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:30:54.990511    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:30:54.990525    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:30:55.029438    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:30:55.029450    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:30:55.045257    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:30:55.045267    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:30:55.057145    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:30:55.057156    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:30:55.068850    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:30:55.068862    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:30:55.081331    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:30:55.081342    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:30:55.085580    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:30:55.085586    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:30:55.138483    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:30:55.138496    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:30:55.162550    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:30:55.162561    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:30:55.176680    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:30:55.176693    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:30:55.190978    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:30:55.190991    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:30:55.202295    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:30:55.202310    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:30:55.219819    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:30:55.219829    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:30:55.237242    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:30:55.237253    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:30:55.249396    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:30:55.249431    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:30:57.775006    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:02.777455    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:02.777720    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:02.806737    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:02.806875    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:02.823391    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:02.823473    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:02.836840    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:02.836914    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:02.851194    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:02.851264    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:02.863716    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:02.863790    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:02.874725    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:02.874792    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:02.886613    3866 logs.go:276] 0 containers: []
	W0818 12:31:02.886624    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:02.886682    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:02.897202    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:02.897226    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:02.897232    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:02.922225    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:02.922238    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:02.937643    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:02.937656    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:02.949608    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:02.949620    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:02.987885    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:02.987895    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:03.023438    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:03.023448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:03.039292    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:03.039303    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:03.051179    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:03.051191    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:03.063435    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:03.063448    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:03.074761    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:03.074772    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:03.088956    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:03.088966    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:03.103742    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:03.103756    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:03.115729    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:03.115741    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:03.133682    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:03.133695    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:03.145015    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:03.145024    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:03.170174    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:03.170184    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:03.174169    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:03.174176    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:05.689745    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:10.692051    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:10.692190    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:10.704541    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:10.704619    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:10.715239    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:10.715310    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:10.725704    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:10.725778    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:10.736500    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:10.736572    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:10.747101    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:10.747172    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:10.757588    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:10.757660    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:10.768152    3866 logs.go:276] 0 containers: []
	W0818 12:31:10.768163    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:10.768224    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:10.778589    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:10.778609    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:10.778615    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:10.782731    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:10.782739    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:10.801294    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:10.801303    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:10.812920    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:10.812931    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:10.852144    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:10.852155    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:10.871482    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:10.871496    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:10.884585    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:10.884598    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:10.922446    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:10.922460    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:10.934640    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:10.934652    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:10.946399    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:10.946410    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:10.963520    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:10.963531    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:10.975655    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:10.975668    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:10.986922    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:10.986932    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:10.998015    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:10.998027    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:11.012405    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:11.012415    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:11.037095    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:11.037105    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:11.061434    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:11.061442    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:13.577541    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:18.579130    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
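Note: the api_server.go lines above show the pattern that repeats for the remainder of this trace: a GET against https://10.0.2.15:8443/healthz that the HTTP client abandons after about five seconds ("context deadline exceeded ... while awaiting headers"). A minimal Go probe with the same observable behavior is sketched below; the five-second timeout is inferred from the timestamps, and skipping certificate verification is an assumption made only because the apiserver's TLS setup is not part of this trace.

    // Minimal sketch of an apiserver healthz probe, assuming a ~5 s client
    // timeout and an untrusted (self-signed) certificate. Illustrative only;
    // not minikube's actual api_server.go implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the gap between "Checking" and "stopped"
            Transport: &http.Transport{
                // Assumption: verification is skipped for the probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }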
	I0818 12:31:18.579253    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:18.594819    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:18.594893    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:18.605786    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:18.605861    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:18.616693    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:18.616758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:18.627348    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:18.627417    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:18.638030    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:18.638095    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:18.648530    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:18.648591    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:18.664117    3866 logs.go:276] 0 containers: []
	W0818 12:31:18.664127    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:18.664182    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:18.674847    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:18.674869    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:18.674875    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:18.686606    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:18.686622    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:18.708900    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:18.708908    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:18.733624    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:18.733635    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:18.745292    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:18.745302    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:18.760752    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:18.760763    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:18.772585    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:18.772600    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:18.783825    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:18.783839    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:18.796507    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:18.796519    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:18.811147    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:18.811158    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:18.827158    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:18.827171    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:18.847250    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:18.847261    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:18.858610    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:18.858625    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:18.871429    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:18.871442    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:18.910739    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:18.910751    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:18.914832    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:18.914841    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:18.948758    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:18.948771    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:21.464953    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:26.467230    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:26.467499    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:26.492035    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:26.492156    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:26.513604    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:26.513684    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:26.525800    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:26.525869    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:26.537340    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:26.537415    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:26.547550    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:26.547613    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:26.566672    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:26.566744    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:26.577000    3866 logs.go:276] 0 containers: []
	W0818 12:31:26.577013    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:26.577072    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:26.588020    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:26.588040    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:26.588047    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:26.592298    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:26.592307    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
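Note: the "container status" source gathered above uses a shell fallback chain: `which crictl || echo crictl` resolves crictl's full path when it is installed (the echo keeps the command syntactically valid when it is not), and the outer `|| sudo docker ps -a` retries with docker when the crictl invocation fails for any reason. The Go snippet below is an assumption-labeled rendering of the same prefer-crictl-else-docker logic.

    // Hypothetical sketch of the crictl-or-docker fallback used for the
    // "container status" log source above. Purely illustrative.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // Prefer crictl when it is on PATH, mirroring `which crictl || echo crictl`.
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return string(out), nil
            }
        }
        // Fall back to docker, mirroring the trailing `|| sudo docker ps -a`.
        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }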
	I0818 12:31:26.604190    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:26.604204    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:26.616359    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:26.616374    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:26.654470    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:26.654477    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:26.688655    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:26.688666    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:26.703609    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:26.703620    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:26.735599    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:26.735609    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:26.749491    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:26.749501    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:26.761566    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:26.761579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:26.774354    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:26.774365    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:26.786144    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:26.786155    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:26.800206    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:26.800215    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:26.811695    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:26.811706    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:26.823207    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:26.823218    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:26.838592    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:26.838605    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:26.856402    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:26.856412    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:29.381720    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:34.384040    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:34.384152    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:34.395561    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:34.395631    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:34.406478    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:34.406553    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:34.417498    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:34.417567    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:34.427896    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:34.427968    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:34.438518    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:34.438587    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:34.454668    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:34.454743    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:34.465107    3866 logs.go:276] 0 containers: []
	W0818 12:31:34.465120    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:34.465176    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:34.479631    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:34.479651    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:34.479656    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:34.496560    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:34.496570    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:34.509726    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:34.509738    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:34.521361    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:34.521374    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:34.557305    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:34.557318    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:34.575046    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:34.575059    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:34.589301    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:34.589314    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:34.601373    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:34.601385    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:34.646072    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:34.646082    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:34.657750    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:34.657762    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:34.673419    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:34.673431    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:34.688275    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:34.688289    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:34.700054    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:34.700066    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:34.724654    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:34.724662    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:34.743232    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:34.743243    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:34.747635    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:34.747644    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:34.771812    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:34.771827    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:37.283966    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:42.286448    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:42.286715    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:42.311731    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:42.311852    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:42.327806    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:42.327886    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:42.340909    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:42.340979    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:42.352389    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:42.352449    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:42.363029    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:42.363099    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:42.373774    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:42.373840    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:42.384499    3866 logs.go:276] 0 containers: []
	W0818 12:31:42.384510    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:42.384573    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:42.396259    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:42.396282    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:42.396289    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:42.401601    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:42.401610    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:42.415566    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:42.415581    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:42.432867    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:42.432881    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:42.453719    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:42.453731    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:42.468015    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:42.468030    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:42.480289    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:42.480305    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:42.517824    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:42.517831    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:42.528849    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:42.528860    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:42.555225    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:42.555240    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:42.566451    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:42.566465    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:42.589408    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:42.589423    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:42.604452    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:42.604462    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:42.615776    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:42.615786    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:42.631245    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:42.631259    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:42.643306    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:42.643319    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:42.665279    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:42.665285    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:45.201446    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:50.204124    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:50.204284    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:50.218387    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:50.218474    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:50.229851    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:50.229919    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:50.240302    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:50.240369    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:50.250397    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:50.250480    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:50.260723    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:50.260796    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:50.271643    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:50.271712    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:50.290989    3866 logs.go:276] 0 containers: []
	W0818 12:31:50.291003    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:50.291064    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:50.301305    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:50.301323    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:50.301329    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:50.337086    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:50.337100    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:50.351526    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:50.351539    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:50.363555    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:50.363567    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:50.367919    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:50.367928    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:50.394047    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:50.394059    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:50.407804    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:50.407813    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:50.419125    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:50.419137    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:50.430204    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:50.430215    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:31:50.452587    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:50.452598    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:50.466342    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:50.466352    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:50.477772    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:50.477784    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:50.489729    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:50.489739    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:50.501614    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:50.501625    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:50.513108    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:50.513119    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:50.552049    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:50.552056    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:50.567025    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:50.567036    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:53.089018    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:31:58.091502    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:31:58.091730    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:31:58.116829    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:31:58.116935    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:31:58.131834    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:31:58.131899    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:31:58.145378    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:31:58.145456    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:31:58.156695    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:31:58.156767    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:31:58.170302    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:31:58.170368    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:31:58.181030    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:31:58.181104    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:31:58.191741    3866 logs.go:276] 0 containers: []
	W0818 12:31:58.191753    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:31:58.191814    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:31:58.202260    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:31:58.202278    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:31:58.202283    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:31:58.227423    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:31:58.227434    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:31:58.238928    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:31:58.238939    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:31:58.250632    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:31:58.250645    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:31:58.264768    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:31:58.264778    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:31:58.282757    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:31:58.282768    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:31:58.320978    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:31:58.320988    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:31:58.325287    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:31:58.325296    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:31:58.339532    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:31:58.339541    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:31:58.353683    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:31:58.353695    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:31:58.366310    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:31:58.366320    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:31:58.382411    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:31:58.382421    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:31:58.394377    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:31:58.394389    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:31:58.407851    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:31:58.407864    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:31:58.449475    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:31:58.449489    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:31:58.462520    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:31:58.462531    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:31:58.474908    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:31:58.474919    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:00.997980    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:06.000274    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:06.000497    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:06.023504    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:06.023601    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:06.038068    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:06.038145    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:06.050500    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:06.050569    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:06.061198    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:06.061268    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:06.071243    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:06.071312    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:06.081552    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:06.081623    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:06.091471    3866 logs.go:276] 0 containers: []
	W0818 12:32:06.091481    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:06.091533    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:06.101874    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:06.101892    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:06.101897    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:06.116508    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:06.116521    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:06.130665    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:06.130675    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:06.154323    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:06.154335    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:06.165988    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:06.166001    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:06.203791    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:06.203801    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:06.208202    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:06.208207    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:06.245825    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:06.245837    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:06.260228    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:06.260238    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:06.271478    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:06.271492    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:06.289539    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:06.289552    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:06.306354    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:06.306365    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:06.330876    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:06.330890    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:06.345704    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:06.345715    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:06.358289    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:06.358300    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:06.373760    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:06.373772    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:06.385869    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:06.385880    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:08.900957    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:13.901277    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:13.901435    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:13.917597    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:13.917676    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:13.930841    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:13.930910    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:13.941443    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:13.941513    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:13.952326    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:13.952394    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:13.963055    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:13.963124    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:13.973972    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:13.974044    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:13.984579    3866 logs.go:276] 0 containers: []
	W0818 12:32:13.984595    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:13.984647    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:13.998826    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:13.998842    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:13.998852    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:14.011045    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:14.011060    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:14.023747    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:14.023761    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:14.035118    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:14.035131    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:14.070796    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:14.070809    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:14.085192    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:14.085203    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:14.097111    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:14.097122    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:14.108598    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:14.108608    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:14.147355    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:14.147367    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:14.161594    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:14.161606    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:14.186482    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:14.186492    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:14.208247    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:14.208256    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:14.220308    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:14.220319    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:14.224864    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:14.224872    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:14.238900    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:14.238914    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:14.251066    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:14.251078    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:14.266990    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:14.267001    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:16.786869    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:21.789215    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:21.789689    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:21.860420    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:21.860467    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:21.890284    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:21.890421    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:21.909076    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:21.909154    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:21.922044    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:21.922118    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:21.937860    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:21.937931    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:21.949108    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:21.949184    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:21.959764    3866 logs.go:276] 0 containers: []
	W0818 12:32:21.959774    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:21.959830    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:21.971485    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:21.971504    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:21.971509    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:21.975915    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:21.975927    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:21.988438    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:21.988455    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:22.001239    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:22.001252    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:22.039244    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:22.039257    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:22.051676    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:22.051688    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:22.071603    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:22.071615    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:22.084855    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:22.084866    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:22.109675    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:22.109686    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:22.122907    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:22.122919    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:22.163587    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:22.163596    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:22.179220    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:22.179231    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:22.198325    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:22.198338    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:22.211777    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:22.211790    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:22.228733    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:22.228743    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:22.240568    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:22.240579    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:22.261314    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:22.261324    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:24.788054    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:29.790487    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:29.790680    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:29.806894    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:29.806979    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:29.819391    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:29.819477    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:29.830172    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:29.830246    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:29.840819    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:29.840895    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:29.852790    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:29.852858    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:29.863088    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:29.863161    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:29.873695    3866 logs.go:276] 0 containers: []
	W0818 12:32:29.873706    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:29.873758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:29.893736    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:29.893755    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:29.893761    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:29.906881    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:29.906895    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:29.918064    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:29.918077    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:29.935626    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:29.935641    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:29.950395    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:29.950405    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:29.963237    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:29.963247    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:29.986423    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:29.986431    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:30.021986    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:30.021996    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:30.035852    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:30.035863    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:30.050445    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:30.050458    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:30.062098    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:30.062113    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:30.077314    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:30.077327    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:30.088502    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:30.088514    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:30.100193    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:30.100206    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:30.138230    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:30.138245    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:30.142508    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:30.142523    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:30.170263    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:30.170277    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:32.689472    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:37.690475    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:37.690625    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:37.707033    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:37.707097    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:37.719903    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:37.719976    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:37.730674    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:37.730745    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:37.741295    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:37.741368    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:37.755085    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:37.755150    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:37.765322    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:37.765392    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:37.776202    3866 logs.go:276] 0 containers: []
	W0818 12:32:37.776217    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:37.776281    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:37.786674    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:37.786695    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:37.786700    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:37.811010    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:37.811025    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:37.830741    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:37.830753    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:37.848006    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:37.848016    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:37.859591    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:37.859603    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:37.894198    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:37.894209    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:37.908031    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:37.908042    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:37.927841    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:37.927852    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:37.950508    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:37.950521    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:37.965354    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:37.965370    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:37.969836    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:37.969843    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:37.984261    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:37.984272    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:38.006979    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:38.006994    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:38.044302    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:38.044312    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:38.058691    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:38.058705    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:38.070603    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:38.070615    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:38.082460    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:38.082469    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:40.596230    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:45.598573    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:45.598791    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:45.618252    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:45.618343    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:45.633979    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:45.634058    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:45.645309    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:45.645372    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:45.655757    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:45.655826    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:45.665967    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:45.666030    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:45.676320    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:45.676383    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:45.686783    3866 logs.go:276] 0 containers: []
	W0818 12:32:45.686793    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:45.686858    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:45.696722    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:45.696740    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:45.696746    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:45.709245    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:45.709256    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:45.721650    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:45.721661    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:45.759588    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:45.759597    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:45.763838    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:45.763847    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:45.777539    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:45.777550    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:45.801980    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:45.801993    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:45.816202    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:45.816215    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:45.828423    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:45.828434    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:45.840045    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:45.840056    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:45.857008    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:45.857019    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:45.868707    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:45.868717    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:45.891365    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:45.891376    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:45.926394    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:45.926409    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:45.941357    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:45.941367    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:45.952511    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:45.952522    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:45.968638    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:45.968651    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:48.486111    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:32:53.488285    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:32:53.488485    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:32:53.509175    3866 logs.go:276] 2 containers: [c170a4e28d71 d9e0e5771a1b]
	I0818 12:32:53.509274    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:32:53.523740    3866 logs.go:276] 2 containers: [7a4410ec7d1a 48a2672c14a5]
	I0818 12:32:53.523809    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:32:53.539563    3866 logs.go:276] 1 containers: [8dcffe093a65]
	I0818 12:32:53.539625    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:32:53.549887    3866 logs.go:276] 2 containers: [a06d6e1bc7fa d4daa11446a6]
	I0818 12:32:53.549954    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:32:53.560086    3866 logs.go:276] 1 containers: [59aba491be8a]
	I0818 12:32:53.560150    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:32:53.571095    3866 logs.go:276] 2 containers: [5094856f5bfc 949b564f2519]
	I0818 12:32:53.571161    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:32:53.581513    3866 logs.go:276] 0 containers: []
	W0818 12:32:53.581525    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:32:53.581578    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:32:53.592770    3866 logs.go:276] 2 containers: [515bf486fd41 caf3543da0ad]
	I0818 12:32:53.592787    3866 logs.go:123] Gathering logs for etcd [7a4410ec7d1a] ...
	I0818 12:32:53.592793    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4410ec7d1a"
	I0818 12:32:53.607353    3866 logs.go:123] Gathering logs for coredns [8dcffe093a65] ...
	I0818 12:32:53.607362    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dcffe093a65"
	I0818 12:32:53.623839    3866 logs.go:123] Gathering logs for kube-controller-manager [5094856f5bfc] ...
	I0818 12:32:53.623850    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5094856f5bfc"
	I0818 12:32:53.641226    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:32:53.641236    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:32:53.662995    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:32:53.663003    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:32:53.666785    3866 logs.go:123] Gathering logs for kube-scheduler [a06d6e1bc7fa] ...
	I0818 12:32:53.666794    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a06d6e1bc7fa"
	I0818 12:32:53.678687    3866 logs.go:123] Gathering logs for kube-controller-manager [949b564f2519] ...
	I0818 12:32:53.678697    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 949b564f2519"
	I0818 12:32:53.691130    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:32:53.691140    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:32:53.729271    3866 logs.go:123] Gathering logs for kube-apiserver [c170a4e28d71] ...
	I0818 12:32:53.729280    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c170a4e28d71"
	I0818 12:32:53.744603    3866 logs.go:123] Gathering logs for etcd [48a2672c14a5] ...
	I0818 12:32:53.744613    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48a2672c14a5"
	I0818 12:32:53.758946    3866 logs.go:123] Gathering logs for storage-provisioner [caf3543da0ad] ...
	I0818 12:32:53.758956    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 caf3543da0ad"
	I0818 12:32:53.770148    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:32:53.770161    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:32:53.804470    3866 logs.go:123] Gathering logs for kube-apiserver [d9e0e5771a1b] ...
	I0818 12:32:53.804484    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9e0e5771a1b"
	I0818 12:32:53.834113    3866 logs.go:123] Gathering logs for kube-scheduler [d4daa11446a6] ...
	I0818 12:32:53.834124    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4daa11446a6"
	I0818 12:32:53.853471    3866 logs.go:123] Gathering logs for kube-proxy [59aba491be8a] ...
	I0818 12:32:53.853482    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59aba491be8a"
	I0818 12:32:53.865312    3866 logs.go:123] Gathering logs for storage-provisioner [515bf486fd41] ...
	I0818 12:32:53.865323    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bf486fd41"
	I0818 12:32:53.878621    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:32:53.878632    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:32:56.392981    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:01.395166    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:01.395237    3866 kubeadm.go:597] duration metric: took 4m3.716019458s to restartPrimaryControlPlane
	W0818 12:33:01.395281    3866 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 12:33:01.395295    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0818 12:33:02.367713    3866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:33:02.372836    3866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 12:33:02.375888    3866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 12:33:02.378667    3866 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 12:33:02.378674    3866 kubeadm.go:157] found existing configuration files:
	
	I0818 12:33:02.378700    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf
	I0818 12:33:02.381169    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 12:33:02.381191    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 12:33:02.384137    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf
	I0818 12:33:02.386851    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 12:33:02.386874    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 12:33:02.389852    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf
	I0818 12:33:02.392989    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 12:33:02.393023    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 12:33:02.395974    3866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf
	I0818 12:33:02.398493    3866 kubeadm.go:163] "https://control-plane.minikube.internal:50472" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50472 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 12:33:02.398515    3866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
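
The four check-and-remove cycles above all follow one pattern; an equivalent bash sketch, with the conf names and endpoint taken from the log itself:

    # Keep each kubeconfig only if it already points at the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50472" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
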
	I0818 12:33:02.400986    3866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 12:33:02.419068    3866 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0818 12:33:02.419109    3866 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 12:33:02.468015    3866 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 12:33:02.468086    3866 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 12:33:02.468144    3866 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 12:33:02.519467    3866 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 12:33:02.530106    3866 out.go:235]   - Generating certificates and keys ...
	I0818 12:33:02.530148    3866 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 12:33:02.530180    3866 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 12:33:02.530222    3866 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 12:33:02.530251    3866 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 12:33:02.530288    3866 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 12:33:02.530324    3866 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 12:33:02.530364    3866 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 12:33:02.530398    3866 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 12:33:02.530456    3866 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 12:33:02.530493    3866 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 12:33:02.530514    3866 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 12:33:02.530545    3866 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 12:33:02.724821    3866 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 12:33:02.959603    3866 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 12:33:02.991015    3866 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 12:33:03.097811    3866 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 12:33:03.128342    3866 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 12:33:03.129382    3866 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 12:33:03.129511    3866 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 12:33:03.217333    3866 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 12:33:03.221526    3866 out.go:235]   - Booting up control plane ...
	I0818 12:33:03.221578    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 12:33:03.221617    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 12:33:03.221655    3866 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 12:33:03.221698    3866 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 12:33:03.221773    3866 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 12:33:07.220529    3866 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002115 seconds
	I0818 12:33:07.220596    3866 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 12:33:07.224554    3866 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 12:33:07.737785    3866 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 12:33:07.738031    3866 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-521000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 12:33:08.241605    3866 kubeadm.go:310] [bootstrap-token] Using token: yvloaz.dit76wmmf7nv51fe
	I0818 12:33:08.245015    3866 out.go:235]   - Configuring RBAC rules ...
	I0818 12:33:08.245066    3866 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 12:33:08.245102    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 12:33:08.248837    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 12:33:08.250081    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0818 12:33:08.251083    3866 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 12:33:08.252326    3866 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 12:33:08.256188    3866 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 12:33:08.442510    3866 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 12:33:08.645527    3866 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 12:33:08.646040    3866 kubeadm.go:310] 
	I0818 12:33:08.646070    3866 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 12:33:08.646074    3866 kubeadm.go:310] 
	I0818 12:33:08.646106    3866 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 12:33:08.646109    3866 kubeadm.go:310] 
	I0818 12:33:08.646122    3866 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 12:33:08.646147    3866 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 12:33:08.646175    3866 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 12:33:08.646178    3866 kubeadm.go:310] 
	I0818 12:33:08.646203    3866 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 12:33:08.646206    3866 kubeadm.go:310] 
	I0818 12:33:08.646238    3866 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 12:33:08.646246    3866 kubeadm.go:310] 
	I0818 12:33:08.646278    3866 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 12:33:08.646317    3866 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 12:33:08.646366    3866 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 12:33:08.646371    3866 kubeadm.go:310] 
	I0818 12:33:08.646422    3866 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 12:33:08.646464    3866 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 12:33:08.646474    3866 kubeadm.go:310] 
	I0818 12:33:08.646523    3866 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yvloaz.dit76wmmf7nv51fe \
	I0818 12:33:08.646577    3866 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 \
	I0818 12:33:08.646587    3866 kubeadm.go:310] 	--control-plane 
	I0818 12:33:08.646591    3866 kubeadm.go:310] 
	I0818 12:33:08.646635    3866 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 12:33:08.646638    3866 kubeadm.go:310] 
	I0818 12:33:08.646689    3866 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yvloaz.dit76wmmf7nv51fe \
	I0818 12:33:08.646752    3866 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d18895eb200fc5d8dee4485c80826dc30d1911aca74865e9ac4dd6ab5b5230f3 
	I0818 12:33:08.646913    3866 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 12:33:08.646922    3866 cni.go:84] Creating CNI manager for ""
	I0818 12:33:08.646930    3866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:33:08.651026    3866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 12:33:08.655031    3866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 12:33:08.657952    3866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
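
The log records only the destination path and size (496 bytes), not the file's contents. As an illustration only, a bridge CNI conflist of this general shape could be written as below; the plugin fields and subnet are assumptions, not the actual bytes minikube copied:

    # Hypothetical bridge CNI config -- NOT the actual file from this run.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
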
	I0818 12:33:08.663371    3866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 12:33:08.663425    3866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 12:33:08.663481    3866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-521000 minikube.k8s.io/updated_at=2024_08_18T12_33_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=stopped-upgrade-521000 minikube.k8s.io/primary=true
	I0818 12:33:08.707378    3866 ops.go:34] apiserver oom_adj: -16
	I0818 12:33:08.707378    3866 kubeadm.go:1113] duration metric: took 43.996417ms to wait for elevateKubeSystemPrivileges
	I0818 12:33:08.707495    3866 kubeadm.go:394] duration metric: took 4m11.042339959s to StartCluster
	I0818 12:33:08.707508    3866 settings.go:142] acquiring lock: {Name:mk5a561ec5cb84c336df08f67624cd54d50bdb17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:33:08.707599    3866 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:33:08.708001    3866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/kubeconfig: {Name:mked914f07b3885fd33f9b87dfa58e56ae6bca41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:33:08.708220    3866 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:33:08.708308    3866 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:33:08.708248    3866 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:33:08.708360    3866 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-521000"
	I0818 12:33:08.708377    3866 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-521000"
	W0818 12:33:08.708381    3866 addons.go:243] addon storage-provisioner should already be in state true
	I0818 12:33:08.708388    3866 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-521000"
	I0818 12:33:08.708392    3866 host.go:66] Checking if "stopped-upgrade-521000" exists ...
	I0818 12:33:08.708401    3866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-521000"
	I0818 12:33:08.709387    3866 kapi.go:59] client config for stopped-upgrade-521000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/profiles/stopped-upgrade-521000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-984/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105fbd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:33:08.709541    3866 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-521000"
	W0818 12:33:08.709546    3866 addons.go:243] addon default-storageclass should already be in state true
	I0818 12:33:08.709552    3866 host.go:66] Checking if "stopped-upgrade-521000" exists ...
	I0818 12:33:08.710971    3866 out.go:177] * Verifying Kubernetes components...
	I0818 12:33:08.711292    3866 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 12:33:08.714126    3866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 12:33:08.714135    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:33:08.717934    3866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 12:33:08.721951    3866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:33:08.724928    3866 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:33:08.724934    3866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 12:33:08.724941    3866 sshutil.go:53] new ssh client: &{IP:localhost Port:50437 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/stopped-upgrade-521000/id_rsa Username:docker}
	I0818 12:33:08.795658    3866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:33:08.800924    3866 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:33:08.800966    3866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:33:08.807574    3866 api_server.go:72] duration metric: took 99.343292ms to wait for apiserver process to appear ...
	I0818 12:33:08.807583    3866 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:33:08.807591    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:08.810988    3866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 12:33:08.852747    3866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 12:33:09.195038    3866 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:33:09.195050    3866 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:33:13.809697    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:13.809744    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:18.810111    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:18.810133    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:23.810586    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:23.810629    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:28.805155    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:28.805200    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:33.801259    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:33.801287    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:38.799166    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:38.799189    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0818 12:33:39.182924    3866 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0818 12:33:39.189141    3866 out.go:177] * Enabled addons: storage-provisioner
	I0818 12:33:39.201057    3866 addons.go:510] duration metric: took 30.507204875s for enable addons: enabled=[storage-provisioner]
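
The StorageClass list that failed above can be retried by hand with the same kubectl binary and kubeconfig paths that appear earlier in this log; a minimal sketch:

    # Reproduces the ListStorageClasses call that timed out (dial tcp 10.0.2.15:8443).
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses
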
	I0818 12:33:43.797927    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:43.797978    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:48.797832    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:48.797861    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:53.798457    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:53.798497    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:33:58.799802    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:33:58.799844    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:03.801320    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:03.801370    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:08.801979    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:08.802120    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:08.822153    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:08.822237    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:08.839453    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:08.839516    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:08.849954    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:08.850026    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:08.860896    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:08.860963    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:08.871462    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:08.871524    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:08.881797    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:08.881870    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:08.892196    3866 logs.go:276] 0 containers: []
	W0818 12:34:08.892210    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:08.892272    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:08.909186    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:08.909202    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:08.909208    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:08.923091    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:08.923104    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:08.934884    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:08.934898    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:08.950193    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:08.950204    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:08.963269    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:08.963279    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:08.998153    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:08.998163    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:09.002784    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:09.002791    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:09.037083    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:09.037098    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:09.055547    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:09.055557    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:09.067346    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:09.067357    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:09.079672    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:09.079682    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:09.091819    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:09.091832    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:09.109869    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:09.109882    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:11.634453    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:16.636404    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:16.636669    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:16.658713    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:16.658828    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:16.674247    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:16.674320    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:16.687307    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:16.687380    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:16.698754    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:16.698829    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:16.708884    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:16.708952    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:16.719142    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:16.719204    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:16.729518    3866 logs.go:276] 0 containers: []
	W0818 12:34:16.729530    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:16.729587    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:16.739990    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:16.740006    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:16.740011    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:16.772907    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:16.772914    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:16.776868    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:16.776874    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:16.811017    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:16.811029    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:16.825408    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:16.825420    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:16.843926    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:16.843937    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:16.855380    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:16.855390    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:16.879962    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:16.879970    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:16.891218    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:16.891232    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:16.905089    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:16.905099    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:16.920794    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:16.920804    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:16.932068    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:16.932082    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:16.946973    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:16.946985    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:19.465462    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:24.468071    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:24.468501    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:24.518476    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:24.518612    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:24.536316    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:24.536378    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:24.550072    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:24.550144    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:24.561314    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:24.561375    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:24.571798    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:24.571865    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:24.582335    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:24.582392    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:24.592492    3866 logs.go:276] 0 containers: []
	W0818 12:34:24.592505    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:24.592558    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:24.602691    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:24.602710    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:24.602715    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:24.638013    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:24.638019    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:24.653498    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:24.653512    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:24.665044    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:24.665056    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:24.676146    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:24.676156    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:24.692945    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:24.692953    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:24.704625    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:24.704636    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:24.728794    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:24.728804    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:24.733094    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:24.733101    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:24.770326    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:24.770335    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:24.784752    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:24.784763    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:24.803618    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:24.803629    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:24.815140    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:24.815153    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:27.328241    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:32.329089    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:32.329545    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:32.368783    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:32.368915    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:32.391228    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:32.391340    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:32.405849    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:32.405932    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:32.418385    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:32.418453    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:32.429180    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:32.429252    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:32.439468    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:32.439535    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:32.451814    3866 logs.go:276] 0 containers: []
	W0818 12:34:32.451828    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:32.451879    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:32.462523    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:32.462537    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:32.462541    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:32.497788    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:32.497799    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:32.502659    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:32.502669    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:32.514129    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:32.514141    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:32.528115    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:32.528126    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:32.539021    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:32.539033    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:32.562519    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:32.562542    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:32.573978    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:32.573988    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:32.607836    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:32.607845    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:32.622719    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:32.622732    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:32.636755    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:32.636769    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:32.648670    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:32.648681    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:32.663437    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:32.663447    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
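
Each "Gathering logs for <component> [<id>]" pair above then tails the last 400 lines of the matching container, the same cap used for the journalctl and dmesg gathers. A sketch of that per-container step; the two container IDs below are copied from the log purely as placeholders:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather tails the last 400 lines of one container's logs,
    // matching the "docker logs --tail 400 <id>" commands above.
    func gather(name, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }

    func main() {
        gather("kube-apiserver", "5fbb37fc2ae4")
        gather("etcd", "ecb2b9e3ca9a")
    }
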
	I0818 12:34:35.183283    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:40.185682    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:40.186132    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:40.229993    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:40.230121    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:40.254652    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:40.254758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:40.268139    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:40.268210    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:40.280179    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:40.280252    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:40.296838    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:40.296904    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:40.307353    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:40.307422    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:40.317640    3866 logs.go:276] 0 containers: []
	W0818 12:34:40.317651    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:40.317708    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:40.327880    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:40.327896    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:40.327902    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:40.362560    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:40.362570    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:40.400300    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:40.400312    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:40.416271    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:40.416281    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:40.433280    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:40.433290    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:40.458034    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:40.458042    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:40.469659    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:40.469670    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:40.480591    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:40.480601    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:40.492204    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:40.492218    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:40.496370    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:40.496377    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:40.511896    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:40.511909    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:40.525929    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:40.525938    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:40.537278    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:40.537290    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:43.054383    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:48.057053    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:48.057437    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:48.096993    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:48.097131    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:48.121947    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:48.122041    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:48.136422    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:48.136499    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:48.148253    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:48.148315    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:48.159166    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:48.159239    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:48.171432    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:48.171502    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:48.181691    3866 logs.go:276] 0 containers: []
	W0818 12:34:48.181704    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:48.181758    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:48.192338    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:48.192356    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:48.192361    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:48.216902    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:48.216915    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:48.232391    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:48.232404    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:48.267627    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:48.267639    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:48.281205    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:48.281217    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:48.295250    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:48.295261    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:48.307129    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:48.307142    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:48.321567    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:48.321580    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:48.332752    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:48.332761    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:48.337538    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:48.337544    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:48.372012    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:48.372022    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:48.388502    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:48.388517    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:48.400241    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:48.400251    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:50.926367    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:34:55.928731    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:34:55.929219    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:34:55.969160    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:34:55.969301    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:34:55.990703    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:34:55.990824    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:34:56.006720    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:34:56.006797    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:34:56.020031    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:34:56.020100    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:34:56.031178    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:34:56.031248    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:34:56.042083    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:34:56.042152    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:34:56.053517    3866 logs.go:276] 0 containers: []
	W0818 12:34:56.053530    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:34:56.053587    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:34:56.064385    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:34:56.064399    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:34:56.064405    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:34:56.099648    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:34:56.099661    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:34:56.113973    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:34:56.113984    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:34:56.128051    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:34:56.128062    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:34:56.139813    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:34:56.139826    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:34:56.152288    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:34:56.152301    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:34:56.169766    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:34:56.169778    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:34:56.180936    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:34:56.180945    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:34:56.216555    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:34:56.216577    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:34:56.228514    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:34:56.228526    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:34:56.248379    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:34:56.248389    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:34:56.272827    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:34:56.272836    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:34:56.295718    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:34:56.295727    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:34:58.801777    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:03.804132    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:03.804531    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:03.848270    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:03.848428    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:03.868386    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:03.868486    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:03.885661    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:35:03.885740    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:03.905679    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:03.905746    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:03.919945    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:03.920019    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:03.930299    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:03.930365    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:03.940781    3866 logs.go:276] 0 containers: []
	W0818 12:35:03.940792    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:03.940849    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:03.951404    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:03.951418    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:03.951423    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:03.963457    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:03.963467    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:03.978321    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:03.978333    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:04.002326    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:04.002333    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:04.036969    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:04.036978    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:04.071389    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:04.071403    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:04.093720    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:04.093732    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:04.105430    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:04.105441    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:04.125848    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:04.125859    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:04.139087    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:04.139100    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:04.143330    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:04.143338    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:04.157560    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:04.157571    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:04.173109    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:04.173122    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:06.693493    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:11.695351    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:11.695602    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:11.714516    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:11.714599    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:11.728092    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:11.728164    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:11.740160    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:35:11.740242    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:11.751380    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:11.751441    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:11.762131    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:11.762197    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:11.773782    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:11.773844    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:11.785011    3866 logs.go:276] 0 containers: []
	W0818 12:35:11.785025    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:11.785085    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:11.795852    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:11.795867    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:11.795872    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:11.810839    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:11.810850    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:11.823025    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:11.823034    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:11.843489    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:11.843499    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:11.855786    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:11.855796    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:11.879342    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:11.879349    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:11.893878    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:11.893891    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:11.906578    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:11.906590    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:11.918719    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:11.918730    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:11.932858    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:11.932868    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:11.945296    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:11.945307    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:11.978741    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:11.978748    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:11.982834    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:11.982843    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
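
The "describe nodes" gather is the one step that goes through kubectl rather than Docker: it runs the version-pinned binary under /var/lib/minikube/binaries/v1.24.1/ against the in-guest kubeconfig, so node conditions are captured even while the apiserver's healthz endpoint is timing out. A sketch assuming that same on-disk layout inside the guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths taken verbatim from the log; this only works inside a
        // guest with minikube's binary and kubeconfig layout.
        cmd := "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
            " --kubeconfig=/var/lib/minikube/kubeconfig"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
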
	I0818 12:35:14.519934    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:19.522554    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:19.523001    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:19.563286    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:19.563419    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:19.585004    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:19.585117    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:19.600386    3866 logs.go:276] 2 containers: [28ca01776195 056e579859db]
	I0818 12:35:19.600462    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:19.613003    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:19.613067    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:19.624358    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:19.624426    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:19.636212    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:19.636278    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:19.647509    3866 logs.go:276] 0 containers: []
	W0818 12:35:19.647520    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:19.647578    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:19.658548    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:19.658563    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:19.658567    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:19.674577    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:19.674587    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:19.707550    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:19.707556    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:19.711628    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:19.711637    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:19.746867    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:19.746881    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:19.759576    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:19.759589    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:19.772380    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:19.772390    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:19.790877    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:19.790888    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:19.802733    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:19.802747    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:19.826677    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:19.826687    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:19.842481    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:19.842495    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:19.856740    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:19.856751    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:19.869044    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:19.869057    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:22.383421    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:27.385607    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:27.386018    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:27.432709    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:27.432836    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:27.454905    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:27.454982    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:27.469610    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:35:27.469689    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:27.482163    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:27.482222    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:27.493226    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:27.493293    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:27.504233    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:27.504300    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:27.515145    3866 logs.go:276] 0 containers: []
	W0818 12:35:27.515156    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:27.515217    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:27.526086    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:27.526105    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:27.526110    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:27.530413    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:27.530420    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:27.565033    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:27.565047    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:27.580864    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:27.580877    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:27.595463    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:27.595474    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:27.609268    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:27.609278    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:27.627170    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:27.627178    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:27.651878    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:27.651887    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:27.686292    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:35:27.686301    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:35:27.697988    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:27.697998    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:27.710628    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:27.710642    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:27.722868    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:27.722882    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:27.735085    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:27.735099    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:27.749164    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:35:27.749177    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:35:27.764813    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:27.764823    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:30.278479    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:35.281302    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:35.281726    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:35.324087    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:35.324223    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:35.346700    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:35.346791    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:35.363004    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:35:35.363085    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:35.382704    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:35.382762    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:35.394098    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:35.394166    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:35.405170    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:35.405235    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:35.420335    3866 logs.go:276] 0 containers: []
	W0818 12:35:35.420347    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:35.420398    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:35.431499    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:35.431517    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:35:35.431522    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:35:35.443132    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:35.443143    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:35.455944    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:35.455957    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:35.468652    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:35.468667    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:35.481334    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:35.481347    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:35.506240    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:35.506248    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:35.510599    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:35.510605    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:35.545180    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:35.545192    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:35.560585    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:35.560596    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:35.574958    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:35:35.574968    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:35:35.586669    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:35.586681    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:35.605286    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:35.605298    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:35.617838    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:35.617850    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:35.630383    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:35.630395    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:35.664919    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:35.664927    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:38.198842    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:43.201588    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:43.201980    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:43.238032    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:43.238166    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:43.258103    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:43.258213    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:43.273642    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:35:43.273719    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:43.285643    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:43.285708    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:43.301082    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:43.301146    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:43.311473    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:43.311534    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:43.321485    3866 logs.go:276] 0 containers: []
	W0818 12:35:43.321496    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:43.321552    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:43.332371    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:43.332390    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:43.332396    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:43.348917    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:43.348934    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:43.367360    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:43.367374    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:43.392041    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:43.392052    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:43.403407    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:43.403418    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:43.407929    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:43.407936    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:43.442039    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:43.442049    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:43.456713    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:43.456728    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:43.470418    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:35:43.470427    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:35:43.481694    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:35:43.481706    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:35:43.493807    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:43.493820    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:43.505695    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:43.505705    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:43.540905    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:43.540914    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:43.555144    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:43.555153    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:43.573929    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:43.573939    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:46.090738    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:51.092995    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:51.093471    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:51.133254    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:51.133376    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:51.154750    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:51.154861    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:51.169782    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:35:51.169863    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:51.181582    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:51.181653    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:51.195972    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:51.196037    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:51.212236    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:51.212294    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:51.227172    3866 logs.go:276] 0 containers: []
	W0818 12:35:51.227186    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:51.227246    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:51.238045    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:51.238062    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:51.238067    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:35:51.272866    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:51.272877    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:51.291979    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:35:51.291992    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:35:51.306983    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:51.306996    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:51.318937    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:51.318949    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:51.330649    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:51.330662    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:51.354921    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:51.354928    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:51.387676    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:51.387683    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:51.391754    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:51.391760    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:51.407324    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:35:51.407336    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:35:51.418708    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:51.418722    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:51.429868    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:51.429881    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:51.452742    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:51.452755    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:51.464218    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:51.464231    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:51.476729    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:51.476743    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:53.993616    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:35:58.996399    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:35:58.996733    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:35:59.033474    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:35:59.033589    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:35:59.052540    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:35:59.052636    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:35:59.067024    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:35:59.067102    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:35:59.078983    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:35:59.079058    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:35:59.090261    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:35:59.090335    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:35:59.101070    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:35:59.101140    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:35:59.111907    3866 logs.go:276] 0 containers: []
	W0818 12:35:59.111919    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:35:59.111969    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:35:59.122087    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:35:59.122105    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:35:59.122111    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:35:59.133780    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:35:59.133793    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:35:59.145756    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:35:59.145770    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:35:59.157064    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:35:59.157074    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:35:59.180279    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:35:59.180287    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:35:59.191903    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:35:59.191914    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:35:59.205695    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:35:59.205709    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:35:59.209978    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:35:59.209986    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:35:59.224107    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:35:59.224121    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:35:59.238948    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:35:59.238960    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:35:59.254015    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:35:59.254027    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:35:59.274563    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:35:59.274572    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:35:59.285889    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:35:59.285902    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:35:59.319319    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:35:59.319329    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:35:59.331309    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:35:59.331321    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:01.866082    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:06.868733    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:06.869160    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:06.906274    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:06.906404    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:06.926887    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:06.926993    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:06.941998    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:06.942067    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:06.955241    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:06.955312    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:06.966054    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:06.966112    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:06.975959    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:06.976031    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:06.986205    3866 logs.go:276] 0 containers: []
	W0818 12:36:06.986215    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:06.986274    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:06.996898    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:06.996916    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:06.996922    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:07.009248    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:07.009259    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:07.026453    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:07.026465    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:07.038303    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:07.038312    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:07.042342    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:07.042350    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:07.056122    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:07.056133    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:07.068499    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:07.068512    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:07.080209    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:07.080222    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:07.109906    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:07.109919    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:07.128615    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:07.128624    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:07.144950    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:07.144959    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:07.156372    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:07.156381    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:07.190226    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:07.190235    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:07.228541    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:07.228553    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:07.251674    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:07.251681    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:09.763715    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:14.765950    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:14.766463    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:14.805351    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:14.805482    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:14.830146    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:14.830243    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:14.845395    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:14.845475    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:14.859219    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:14.859281    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:14.869412    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:14.869480    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:14.879728    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:14.879789    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:14.890195    3866 logs.go:276] 0 containers: []
	W0818 12:36:14.890206    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:14.890278    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:14.900878    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:14.900897    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:14.900902    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:14.912299    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:14.912313    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:14.946476    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:14.946484    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:14.979440    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:14.979452    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:14.994065    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:14.994078    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:15.010963    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:15.010977    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:15.022746    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:15.022758    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:15.034482    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:15.034494    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:15.038933    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:15.038942    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:15.051048    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:15.051061    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:15.062321    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:15.062334    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:15.079724    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:15.079737    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:15.091687    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:15.091698    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:15.106333    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:15.106345    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:15.123230    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:15.123240    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:17.649819    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:22.652511    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:22.652599    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:22.664547    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:22.664617    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:22.682665    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:22.682738    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:22.694821    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:22.694898    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:22.707826    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:22.707894    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:22.726828    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:22.726900    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:22.739723    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:22.739797    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:22.751622    3866 logs.go:276] 0 containers: []
	W0818 12:36:22.751635    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:22.751706    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:22.763595    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:22.763614    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:22.763620    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:22.779107    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:22.779128    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:22.793226    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:22.793238    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:22.819185    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:22.819197    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:22.823528    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:22.823538    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:22.859559    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:22.859570    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:22.874292    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:22.874306    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:22.886231    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:22.886242    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:22.903963    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:22.903974    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:22.938635    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:22.938645    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:22.953733    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:22.953746    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:22.965824    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:22.965835    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:22.983048    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:22.983059    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:22.994373    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:22.994385    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:23.013668    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:23.013680    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:25.527709    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:30.530302    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:30.530447    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:30.546925    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:30.546996    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:30.559718    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:30.559780    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:30.571102    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:30.571174    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:30.581496    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:30.581560    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:30.591930    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:30.591992    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:30.602113    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:30.602172    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:30.612104    3866 logs.go:276] 0 containers: []
	W0818 12:36:30.612119    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:30.612179    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:30.630202    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:30.630218    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:30.630225    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:30.634432    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:30.634440    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:30.645567    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:30.645580    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:30.657207    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:30.657221    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:30.674641    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:30.674654    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:30.685976    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:30.685988    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:30.702383    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:30.702394    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:30.721142    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:30.721152    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:30.733719    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:30.733734    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:30.748288    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:30.748302    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:30.760051    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:30.760065    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:30.785138    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:30.785149    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:30.819568    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:30.819576    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:30.858324    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:30.858336    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:30.871879    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:30.871890    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:33.385368    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:38.387623    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:38.388044    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:38.429155    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:38.429304    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:38.451921    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:38.452018    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:38.468272    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:38.468355    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:38.484359    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:38.484432    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:38.495130    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:38.495192    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:38.510069    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:38.510131    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:38.520472    3866 logs.go:276] 0 containers: []
	W0818 12:36:38.520485    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:38.520543    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:38.531009    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:38.531027    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:38.531032    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:38.567505    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:38.567519    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:38.579241    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:38.579255    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:38.593646    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:38.593661    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:38.616314    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:38.616322    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:38.648905    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:38.648911    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:38.653271    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:38.653280    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:38.665285    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:38.665296    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:38.682525    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:38.682536    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:38.694422    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:38.694436    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:38.708778    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:38.708792    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:38.720365    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:38.720374    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:38.732237    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:38.732251    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:38.746477    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:38.746487    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:38.757556    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:38.757567    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:41.269107    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:46.272196    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:46.272276    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:46.283484    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:46.283537    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:46.294180    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:46.294244    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:46.305364    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:46.305419    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:46.317509    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:46.317561    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:46.331941    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:46.331998    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:46.343114    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:46.343173    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:46.355057    3866 logs.go:276] 0 containers: []
	W0818 12:36:46.355066    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:46.355123    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:46.367024    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:46.367038    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:46.367042    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:46.381970    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:46.381987    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:46.395069    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:46.395080    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:46.414011    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:46.414022    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:46.426015    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:46.426027    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:46.438483    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:46.438494    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:46.457095    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:46.457104    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:46.472183    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:46.472195    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:46.507063    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:46.507073    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:46.518876    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:46.518887    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:46.523145    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:46.523156    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:46.560336    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:46.560347    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:46.575656    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:46.575667    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:46.588697    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:46.588706    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:46.600159    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:46.600167    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:49.125971    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:36:54.128328    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:36:54.128766    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:36:54.168169    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:36:54.168310    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:36:54.190108    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:36:54.190225    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:36:54.206642    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:36:54.206717    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:36:54.219119    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:36:54.219183    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:36:54.230014    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:36:54.230089    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:36:54.240514    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:36:54.240583    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:36:54.250553    3866 logs.go:276] 0 containers: []
	W0818 12:36:54.250563    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:36:54.250623    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:36:54.265882    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:36:54.265900    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:36:54.265905    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:36:54.300308    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:36:54.300316    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:36:54.339458    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:36:54.339471    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:36:54.352746    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:36:54.352759    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:36:54.369322    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:36:54.369338    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:36:54.389099    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:36:54.389121    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:36:54.414629    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:36:54.414649    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:36:54.419940    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:36:54.419952    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:36:54.436492    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:36:54.436505    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:36:54.450087    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:36:54.450100    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:36:54.462978    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:36:54.462990    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:36:54.475998    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:36:54.476012    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:36:54.489596    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:36:54.489607    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:36:54.504834    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:36:54.504845    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:36:54.518324    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:36:54.518334    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:36:57.033749    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:37:02.035952    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:37:02.036260    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0818 12:37:02.067313    3866 logs.go:276] 1 containers: [5fbb37fc2ae4]
	I0818 12:37:02.067435    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0818 12:37:02.086453    3866 logs.go:276] 1 containers: [ecb2b9e3ca9a]
	I0818 12:37:02.086555    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0818 12:37:02.100999    3866 logs.go:276] 4 containers: [2e56635ec6fd e9d0772f5894 28ca01776195 056e579859db]
	I0818 12:37:02.101061    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0818 12:37:02.112890    3866 logs.go:276] 1 containers: [189f50144f13]
	I0818 12:37:02.112948    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0818 12:37:02.123454    3866 logs.go:276] 1 containers: [581eec9c2066]
	I0818 12:37:02.123528    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0818 12:37:02.133937    3866 logs.go:276] 1 containers: [323377d12265]
	I0818 12:37:02.134022    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0818 12:37:02.144490    3866 logs.go:276] 0 containers: []
	W0818 12:37:02.144503    3866 logs.go:278] No container was found matching "kindnet"
	I0818 12:37:02.144551    3866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0818 12:37:02.154813    3866 logs.go:276] 1 containers: [8522a7793307]
	I0818 12:37:02.154829    3866 logs.go:123] Gathering logs for coredns [e9d0772f5894] ...
	I0818 12:37:02.154835    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d0772f5894"
	I0818 12:37:02.168053    3866 logs.go:123] Gathering logs for kube-controller-manager [323377d12265] ...
	I0818 12:37:02.168064    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 323377d12265"
	I0818 12:37:02.185903    3866 logs.go:123] Gathering logs for etcd [ecb2b9e3ca9a] ...
	I0818 12:37:02.185913    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecb2b9e3ca9a"
	I0818 12:37:02.200184    3866 logs.go:123] Gathering logs for container status ...
	I0818 12:37:02.200196    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 12:37:02.217008    3866 logs.go:123] Gathering logs for describe nodes ...
	I0818 12:37:02.217021    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 12:37:02.251738    3866 logs.go:123] Gathering logs for dmesg ...
	I0818 12:37:02.251750    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 12:37:02.256197    3866 logs.go:123] Gathering logs for kube-apiserver [5fbb37fc2ae4] ...
	I0818 12:37:02.256204    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fbb37fc2ae4"
	I0818 12:37:02.274144    3866 logs.go:123] Gathering logs for kube-scheduler [189f50144f13] ...
	I0818 12:37:02.274156    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 189f50144f13"
	I0818 12:37:02.291747    3866 logs.go:123] Gathering logs for kube-proxy [581eec9c2066] ...
	I0818 12:37:02.291758    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 581eec9c2066"
	I0818 12:37:02.305667    3866 logs.go:123] Gathering logs for storage-provisioner [8522a7793307] ...
	I0818 12:37:02.305679    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8522a7793307"
	I0818 12:37:02.319102    3866 logs.go:123] Gathering logs for kubelet ...
	I0818 12:37:02.319115    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 12:37:02.354486    3866 logs.go:123] Gathering logs for coredns [28ca01776195] ...
	I0818 12:37:02.354494    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28ca01776195"
	I0818 12:37:02.365394    3866 logs.go:123] Gathering logs for coredns [056e579859db] ...
	I0818 12:37:02.365404    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 056e579859db"
	I0818 12:37:02.377357    3866 logs.go:123] Gathering logs for Docker ...
	I0818 12:37:02.377366    3866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0818 12:37:02.401161    3866 logs.go:123] Gathering logs for coredns [2e56635ec6fd] ...
	I0818 12:37:02.401174    3866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e56635ec6fd"
	I0818 12:37:04.914995    3866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0818 12:37:09.916977    3866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0818 12:37:09.921046    3866 out.go:201] 
	W0818 12:37:09.927070    3866 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0818 12:37:09.927141    3866 out.go:270] * 
	W0818 12:37:09.927596    3866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:09.940079    3866 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-521000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.08s)
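What the long loop above shows: minikube probes the apiserver's /healthz endpoint roughly every eight seconds, each probe giving up after about five seconds (the gap between each "Checking apiserver healthz" line and its "stopped" line), and gathers container logs between attempts until the overall "wait 6m0s for node" budget is spent. The following Go sketch reproduces that polling shape for illustration only; it is not minikube's implementation. The endpoint, per-request timeout, and 6m deadline come from the log; the retry interval and TLS handling are assumptions.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Per-request timeout matching the ~5s gap between each
			// "Checking apiserver healthz" line and its "stopped" line.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster serves a self-signed certificate;
				// verification is skipped purely to keep the sketch
				// self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Overall budget from the failure message: "wait 6m0s for node".
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // assumed back-off between probes
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}

In the run above every probe fails the same way, so the loop exhausts its budget and the test exits with GUEST_START.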

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-042000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-042000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.787999584s)

-- stdout --
	* [pause-042000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-042000" primary control-plane node in "pause-042000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-042000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-042000 -n pause-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-042000 -n pause-042000: exit status 7 (60.424625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
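This failure, and those that follow it here, share a single root cause: the qemu2 driver cannot reach the socket_vmnet daemon, so each `Failed to connect to "/var/run/socket_vmnet": Connection refused` line means nothing is listening on that unix socket. A minimal Go probe is sketched below for illustration; the socket path is taken from the log, while the probe itself is an assumed diagnostic, not part of the minikube test suite (and it may need elevated privileges to open the socket).

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken verbatim from the failure output above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" means nothing is listening on the socket,
			// i.e. the socket_vmnet daemon is not running (or the path is stale).
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a check before the qemu2 suites would let them fail fast with a clearer message than the repeated StartHost retries seen above.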

TestNoKubernetes/serial/StartWithK8s (9.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 : exit status 80 (9.8435625s)

-- stdout --
	* [NoKubernetes-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-621000" primary control-plane node in "NoKubernetes-621000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-621000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000: exit status 7 (62.559583ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-621000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.91s)

TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 : exit status 80 (5.262146125s)

-- stdout --
	* [NoKubernetes-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-621000
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-621000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000: exit status 7 (63.771291ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-621000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247259792s)

-- stdout --
	* [NoKubernetes-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-621000
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-621000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000: exit status 7 (70.881417ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-621000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 : exit status 80 (5.270320958s)

-- stdout --
	* [NoKubernetes-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-621000
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-621000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-621000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-621000 -n NoKubernetes-621000: exit status 7 (37.243458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-621000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.90883875s)

-- stdout --
	* [auto-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-937000" primary control-plane node in "auto-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:35:21.908137    4384 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:35:21.908270    4384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:21.908273    4384 out.go:358] Setting ErrFile to fd 2...
	I0818 12:35:21.908275    4384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:21.908394    4384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:35:21.909550    4384 out.go:352] Setting JSON to false
	I0818 12:35:21.926269    4384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3891,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:35:21.926350    4384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:35:21.931123    4384 out.go:177] * [auto-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:35:21.939117    4384 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:35:21.939218    4384 notify.go:220] Checking for updates...
	I0818 12:35:21.946096    4384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:35:21.949144    4384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:35:21.952096    4384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:35:21.955118    4384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:35:21.958110    4384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:35:21.961382    4384 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:35:21.961451    4384 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:35:21.961497    4384 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:35:21.966100    4384 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:35:21.969094    4384 start.go:297] selected driver: qemu2
	I0818 12:35:21.969099    4384 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:35:21.969104    4384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:35:21.971282    4384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:35:21.974077    4384 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:35:21.977202    4384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:35:21.977235    4384 cni.go:84] Creating CNI manager for ""
	I0818 12:35:21.977249    4384 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:35:21.977253    4384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:35:21.977278    4384 start.go:340] cluster config:
	{Name:auto-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:35:21.980701    4384 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:35:21.988106    4384 out.go:177] * Starting "auto-937000" primary control-plane node in "auto-937000" cluster
	I0818 12:35:21.992104    4384 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:35:21.992122    4384 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:35:21.992131    4384 cache.go:56] Caching tarball of preloaded images
	I0818 12:35:21.992200    4384 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:35:21.992205    4384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:35:21.992259    4384 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/auto-937000/config.json ...
	I0818 12:35:21.992269    4384 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/auto-937000/config.json: {Name:mk3c8a41afdff9edd08e0f2dd71b0164072f538e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:35:21.992593    4384 start.go:360] acquireMachinesLock for auto-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:21.992623    4384 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "auto-937000"
	I0818 12:35:21.992634    4384 start.go:93] Provisioning new machine with config: &{Name:auto-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:21.992680    4384 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:21.997048    4384 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:22.012204    4384 start.go:159] libmachine.API.Create for "auto-937000" (driver="qemu2")
	I0818 12:35:22.012228    4384 client.go:168] LocalClient.Create starting
	I0818 12:35:22.012288    4384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:22.012318    4384 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:22.012331    4384 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:22.012365    4384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:22.012388    4384 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:22.012397    4384 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:22.012773    4384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:22.163342    4384 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:22.300369    4384 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:22.300382    4384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:22.300621    4384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:22.310141    4384 main.go:141] libmachine: STDOUT: 
	I0818 12:35:22.310157    4384 main.go:141] libmachine: STDERR: 
	I0818 12:35:22.310211    4384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2 +20000M
	I0818 12:35:22.318446    4384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:22.318463    4384 main.go:141] libmachine: STDERR: 
	I0818 12:35:22.318476    4384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:22.318480    4384 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:22.318490    4384 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:22.318516    4384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:17:c7:bb:73:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:22.320221    4384 main.go:141] libmachine: STDOUT: 
	I0818 12:35:22.320238    4384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:22.320254    4384 client.go:171] duration metric: took 308.026ms to LocalClient.Create
	I0818 12:35:24.322430    4384 start.go:128] duration metric: took 2.3297535s to createHost
	I0818 12:35:24.322514    4384 start.go:83] releasing machines lock for "auto-937000", held for 2.329916208s
	W0818 12:35:24.322614    4384 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:24.329020    4384 out.go:177] * Deleting "auto-937000" in qemu2 ...
	W0818 12:35:24.360454    4384 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:24.360491    4384 start.go:729] Will try again in 5 seconds ...
	I0818 12:35:29.362691    4384 start.go:360] acquireMachinesLock for auto-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:29.363349    4384 start.go:364] duration metric: took 530.916µs to acquireMachinesLock for "auto-937000"
	I0818 12:35:29.363524    4384 start.go:93] Provisioning new machine with config: &{Name:auto-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:auto-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:29.363870    4384 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:29.368614    4384 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:29.418355    4384 start.go:159] libmachine.API.Create for "auto-937000" (driver="qemu2")
	I0818 12:35:29.418448    4384 client.go:168] LocalClient.Create starting
	I0818 12:35:29.418644    4384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:29.418717    4384 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:29.418735    4384 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:29.418800    4384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:29.418845    4384 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:29.418857    4384 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:29.419491    4384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:29.581200    4384 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:29.722742    4384 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:29.722755    4384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:29.723010    4384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:29.732541    4384 main.go:141] libmachine: STDOUT: 
	I0818 12:35:29.732563    4384 main.go:141] libmachine: STDERR: 
	I0818 12:35:29.732626    4384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2 +20000M
	I0818 12:35:29.740795    4384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:29.740825    4384 main.go:141] libmachine: STDERR: 
	I0818 12:35:29.740839    4384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:29.740846    4384 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:29.740855    4384 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:29.740884    4384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8b:e0:57:d9:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/auto-937000/disk.qcow2
	I0818 12:35:29.742604    4384 main.go:141] libmachine: STDOUT: 
	I0818 12:35:29.742618    4384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:29.742629    4384 client.go:171] duration metric: took 324.149709ms to LocalClient.Create
	I0818 12:35:31.744799    4384 start.go:128] duration metric: took 2.380928334s to createHost
	I0818 12:35:31.744861    4384 start.go:83] releasing machines lock for "auto-937000", held for 2.3815155s
	W0818 12:35:31.745228    4384 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:31.759890    4384 out.go:201] 
	W0818 12:35:31.763986    4384 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:35:31.764014    4384 out.go:270] * 
	* 
	W0818 12:35:31.766707    4384 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:35:31.773925    4384 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
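
The network-plugin starts never reach kubeadm; each dies at the same socket refusal during VM creation. The daemon can also be probed directly with the client binary the logs already invoke, which gives a faster signal than a full minikube start. A hedged one-liner, assuming the same /opt/socket_vmnet layout shown above:

	# socket_vmnet_client connects to the socket and then execs the given
	# command; when the daemon is down it prints the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true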

TestNetworkPlugins/group/kindnet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.783577792s)

-- stdout --
	* [kindnet-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-937000" primary control-plane node in "kindnet-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:35:33.944565    4498 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:35:33.944696    4498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:33.944700    4498 out.go:358] Setting ErrFile to fd 2...
	I0818 12:35:33.944702    4498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:33.944845    4498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:35:33.945936    4498 out.go:352] Setting JSON to false
	I0818 12:35:33.962480    4498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3903,"bootTime":1724005830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:35:33.962544    4498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:35:33.968264    4498 out.go:177] * [kindnet-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:35:33.976138    4498 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:35:33.976187    4498 notify.go:220] Checking for updates...
	I0818 12:35:33.982080    4498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:35:33.985103    4498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:35:33.988102    4498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:35:33.995150    4498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:35:33.998171    4498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:35:34.001421    4498 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:35:34.001488    4498 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:35:34.001535    4498 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:35:34.002794    4498 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:35:34.011112    4498 start.go:297] selected driver: qemu2
	I0818 12:35:34.011119    4498 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:35:34.011126    4498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:35:34.013188    4498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:35:34.016044    4498 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:35:34.020167    4498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:35:34.020187    4498 cni.go:84] Creating CNI manager for "kindnet"
	I0818 12:35:34.020193    4498 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 12:35:34.020220    4498 start.go:340] cluster config:
	{Name:kindnet-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:35:34.023308    4498 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:35:34.030093    4498 out.go:177] * Starting "kindnet-937000" primary control-plane node in "kindnet-937000" cluster
	I0818 12:35:34.034121    4498 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:35:34.034137    4498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:35:34.034145    4498 cache.go:56] Caching tarball of preloaded images
	I0818 12:35:34.034198    4498 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:35:34.034206    4498 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:35:34.034260    4498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kindnet-937000/config.json ...
	I0818 12:35:34.034271    4498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kindnet-937000/config.json: {Name:mk2c7fed30b705f8e0002b361a0108b7182ed834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:35:34.034517    4498 start.go:360] acquireMachinesLock for kindnet-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:34.034551    4498 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "kindnet-937000"
	I0818 12:35:34.034563    4498 start.go:93] Provisioning new machine with config: &{Name:kindnet-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:34.034601    4498 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:34.041144    4498 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:34.056331    4498 start.go:159] libmachine.API.Create for "kindnet-937000" (driver="qemu2")
	I0818 12:35:34.056355    4498 client.go:168] LocalClient.Create starting
	I0818 12:35:34.056415    4498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:34.056445    4498 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:34.056456    4498 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:34.056494    4498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:34.056516    4498 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:34.056524    4498 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:34.056975    4498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:34.209165    4498 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:34.256373    4498 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:34.256379    4498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:34.256608    4498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:34.266169    4498 main.go:141] libmachine: STDOUT: 
	I0818 12:35:34.266187    4498 main.go:141] libmachine: STDERR: 
	I0818 12:35:34.266228    4498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2 +20000M
	I0818 12:35:34.274289    4498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:34.274304    4498 main.go:141] libmachine: STDERR: 
	I0818 12:35:34.274316    4498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:34.274322    4498 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:34.274344    4498 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:34.274370    4498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:60:8c:99:7b:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:34.276026    4498 main.go:141] libmachine: STDOUT: 
	I0818 12:35:34.276044    4498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:34.276061    4498 client.go:171] duration metric: took 219.706125ms to LocalClient.Create
	I0818 12:35:36.278230    4498 start.go:128] duration metric: took 2.243622458s to createHost
	I0818 12:35:36.278303    4498 start.go:83] releasing machines lock for "kindnet-937000", held for 2.243775917s
	W0818 12:35:36.278396    4498 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:36.295958    4498 out.go:177] * Deleting "kindnet-937000" in qemu2 ...
	W0818 12:35:36.324465    4498 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:36.324494    4498 start.go:729] Will try again in 5 seconds ...
	I0818 12:35:41.325359    4498 start.go:360] acquireMachinesLock for kindnet-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:41.326124    4498 start.go:364] duration metric: took 652µs to acquireMachinesLock for "kindnet-937000"
	I0818 12:35:41.326315    4498 start.go:93] Provisioning new machine with config: &{Name:kindnet-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:41.326705    4498 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:41.332387    4498 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:41.382759    4498 start.go:159] libmachine.API.Create for "kindnet-937000" (driver="qemu2")
	I0818 12:35:41.382823    4498 client.go:168] LocalClient.Create starting
	I0818 12:35:41.382943    4498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:41.383006    4498 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:41.383041    4498 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:41.383142    4498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:41.383192    4498 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:41.383202    4498 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:41.383750    4498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:41.543396    4498 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:41.646508    4498 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:41.646518    4498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:41.646752    4498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:41.656451    4498 main.go:141] libmachine: STDOUT: 
	I0818 12:35:41.656472    4498 main.go:141] libmachine: STDERR: 
	I0818 12:35:41.656518    4498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2 +20000M
	I0818 12:35:41.664658    4498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:41.664683    4498 main.go:141] libmachine: STDERR: 
	I0818 12:35:41.664695    4498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:41.664700    4498 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:41.664705    4498 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:41.664747    4498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:0e:66:42:5b:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kindnet-937000/disk.qcow2
	I0818 12:35:41.666507    4498 main.go:141] libmachine: STDOUT: 
	I0818 12:35:41.666523    4498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:41.666536    4498 client.go:171] duration metric: took 283.712208ms to LocalClient.Create
	I0818 12:35:43.668600    4498 start.go:128] duration metric: took 2.341886459s to createHost
	I0818 12:35:43.668634    4498 start.go:83] releasing machines lock for "kindnet-937000", held for 2.342475875s
	W0818 12:35:43.668760    4498 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:43.677208    4498 out.go:201] 
	W0818 12:35:43.682012    4498 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:35:43.682019    4498 out.go:270] * 
	* 
	W0818 12:35:43.682527    4498 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:35:43.692941    4498 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.78s)

TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0818 12:35:51.722435    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.815287875s)

-- stdout --
	* [calico-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-937000" primary control-plane node in "calico-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:35:45.925308    4612 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:35:45.925443    4612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:45.925447    4612 out.go:358] Setting ErrFile to fd 2...
	I0818 12:35:45.925449    4612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:45.925589    4612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:35:45.926759    4612 out.go:352] Setting JSON to false
	I0818 12:35:45.943641    4612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3915,"bootTime":1724005830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:35:45.943719    4612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:35:45.950091    4612 out.go:177] * [calico-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:35:45.957014    4612 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:35:45.957029    4612 notify.go:220] Checking for updates...
	I0818 12:35:45.963989    4612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:35:45.967013    4612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:35:45.970048    4612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:35:45.972973    4612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:35:45.975972    4612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:35:45.979308    4612 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:35:45.979377    4612 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:35:45.979431    4612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:35:45.982899    4612 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:35:45.990017    4612 start.go:297] selected driver: qemu2
	I0818 12:35:45.990023    4612 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:35:45.990029    4612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:35:45.992078    4612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:35:45.993474    4612 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:35:45.996073    4612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:35:45.996094    4612 cni.go:84] Creating CNI manager for "calico"
	I0818 12:35:45.996105    4612 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0818 12:35:45.996135    4612 start.go:340] cluster config:
	{Name:calico-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:35:45.999511    4612 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:35:46.006962    4612 out.go:177] * Starting "calico-937000" primary control-plane node in "calico-937000" cluster
	I0818 12:35:46.010963    4612 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:35:46.010981    4612 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:35:46.010987    4612 cache.go:56] Caching tarball of preloaded images
	I0818 12:35:46.011066    4612 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:35:46.011074    4612 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:35:46.011132    4612 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/calico-937000/config.json ...
	I0818 12:35:46.011144    4612 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/calico-937000/config.json: {Name:mkfd1e8f6d50911e16279661a938a158413b674d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:35:46.011404    4612 start.go:360] acquireMachinesLock for calico-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:46.011442    4612 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "calico-937000"
	I0818 12:35:46.011454    4612 start.go:93] Provisioning new machine with config: &{Name:calico-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:46.011482    4612 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:46.015957    4612 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:46.031266    4612 start.go:159] libmachine.API.Create for "calico-937000" (driver="qemu2")
	I0818 12:35:46.031289    4612 client.go:168] LocalClient.Create starting
	I0818 12:35:46.031353    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:46.031384    4612 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:46.031394    4612 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:46.031430    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:46.031454    4612 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:46.031461    4612 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:46.031850    4612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:46.183413    4612 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:46.281201    4612 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:46.281207    4612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:46.281434    4612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:46.290866    4612 main.go:141] libmachine: STDOUT: 
	I0818 12:35:46.290895    4612 main.go:141] libmachine: STDERR: 
	I0818 12:35:46.290949    4612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2 +20000M
	I0818 12:35:46.299108    4612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:46.299129    4612 main.go:141] libmachine: STDERR: 
	I0818 12:35:46.299153    4612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:46.299158    4612 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:46.299174    4612 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:46.299196    4612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:de:b2:14:26:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:46.300857    4612 main.go:141] libmachine: STDOUT: 
	I0818 12:35:46.300874    4612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:46.300893    4612 client.go:171] duration metric: took 269.602666ms to LocalClient.Create
	I0818 12:35:48.303069    4612 start.go:128] duration metric: took 2.291591375s to createHost
	I0818 12:35:48.303157    4612 start.go:83] releasing machines lock for "calico-937000", held for 2.2917385s
	W0818 12:35:48.303297    4612 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:48.321678    4612 out.go:177] * Deleting "calico-937000" in qemu2 ...
	W0818 12:35:48.350003    4612 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:48.350037    4612 start.go:729] Will try again in 5 seconds ...
	I0818 12:35:53.352231    4612 start.go:360] acquireMachinesLock for calico-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:53.352759    4612 start.go:364] duration metric: took 395.958µs to acquireMachinesLock for "calico-937000"
	I0818 12:35:53.352889    4612 start.go:93] Provisioning new machine with config: &{Name:calico-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:53.353204    4612 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:53.362197    4612 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:53.410645    4612 start.go:159] libmachine.API.Create for "calico-937000" (driver="qemu2")
	I0818 12:35:53.410701    4612 client.go:168] LocalClient.Create starting
	I0818 12:35:53.410814    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:53.410885    4612 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:53.410903    4612 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:53.410974    4612 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:53.411019    4612 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:53.411029    4612 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:53.411590    4612 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:53.581790    4612 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:53.646872    4612 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:53.646878    4612 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:53.647109    4612 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:53.656691    4612 main.go:141] libmachine: STDOUT: 
	I0818 12:35:53.656708    4612 main.go:141] libmachine: STDERR: 
	I0818 12:35:53.656756    4612 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2 +20000M
	I0818 12:35:53.665031    4612 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:53.665047    4612 main.go:141] libmachine: STDERR: 
	I0818 12:35:53.665077    4612 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:53.665082    4612 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:53.665092    4612 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:53.665126    4612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:1a:3c:7b:25:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/calico-937000/disk.qcow2
	I0818 12:35:53.666883    4612 main.go:141] libmachine: STDOUT: 
	I0818 12:35:53.666901    4612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:53.666912    4612 client.go:171] duration metric: took 256.205083ms to LocalClient.Create
	I0818 12:35:55.669103    4612 start.go:128] duration metric: took 2.31585975s to createHost
	I0818 12:35:55.669179    4612 start.go:83] releasing machines lock for "calico-937000", held for 2.316428s
	W0818 12:35:55.669541    4612 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:35:55.681183    4612 out.go:201] 
	W0818 12:35:55.687226    4612 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:35:55.687247    4612 out.go:270] * 
	* 
	W0818 12:35:55.689027    4612 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:35:55.699160    4612 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
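The stderr capture above shows the driver's recovery path explicitly: createHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then surfaces GUEST_PROVISION with exit status 80. A minimal shell sketch of that retry-once shape, for reading the log (start_vm and delete_vm are hypothetical stand-ins, not minikube code):

	# Retry-once pattern as seen in the log above.
	if ! start_vm; then
		delete_vm             # "* Deleting ... in qemu2 ..."
		sleep 5               # "Will try again in 5 seconds ..."
		start_vm || exit 80   # second failure -> GUEST_PROVISION, exit status 80
	fi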

TestNetworkPlugins/group/custom-flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.894096458s)

-- stdout --
	* [custom-flannel-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-937000" primary control-plane node in "custom-flannel-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:35:58.140388    4729 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:35:58.140530    4729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:58.140533    4729 out.go:358] Setting ErrFile to fd 2...
	I0818 12:35:58.140536    4729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:35:58.140665    4729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:35:58.141729    4729 out.go:352] Setting JSON to false
	I0818 12:35:58.158237    4729 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3928,"bootTime":1724005830,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:35:58.158307    4729 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:35:58.165371    4729 out.go:177] * [custom-flannel-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:35:58.171003    4729 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:35:58.171078    4729 notify.go:220] Checking for updates...
	I0818 12:35:58.178357    4729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:35:58.179685    4729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:35:58.182380    4729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:35:58.185389    4729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:35:58.188353    4729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:35:58.191769    4729 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:35:58.191832    4729 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:35:58.191881    4729 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:35:58.196351    4729 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:35:58.203413    4729 start.go:297] selected driver: qemu2
	I0818 12:35:58.203421    4729 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:35:58.203427    4729 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:35:58.205781    4729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:35:58.209349    4729 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:35:58.212417    4729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:35:58.212455    4729 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0818 12:35:58.212463    4729 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0818 12:35:58.212493    4729 start.go:340] cluster config:
	{Name:custom-flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:35:58.216001    4729 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:35:58.219383    4729 out.go:177] * Starting "custom-flannel-937000" primary control-plane node in "custom-flannel-937000" cluster
	I0818 12:35:58.223361    4729 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:35:58.223375    4729 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:35:58.223382    4729 cache.go:56] Caching tarball of preloaded images
	I0818 12:35:58.223433    4729 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:35:58.223437    4729 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:35:58.223490    4729 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/custom-flannel-937000/config.json ...
	I0818 12:35:58.223499    4729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/custom-flannel-937000/config.json: {Name:mk170768173648400ae217b754eddae45a9dd517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:35:58.223772    4729 start.go:360] acquireMachinesLock for custom-flannel-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:58.223804    4729 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "custom-flannel-937000"
	I0818 12:35:58.223815    4729 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:58.223838    4729 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:35:58.231355    4729 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:35:58.246497    4729 start.go:159] libmachine.API.Create for "custom-flannel-937000" (driver="qemu2")
	I0818 12:35:58.246527    4729 client.go:168] LocalClient.Create starting
	I0818 12:35:58.246599    4729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:35:58.246629    4729 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:58.246638    4729 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:58.246673    4729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:35:58.246696    4729 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:58.246702    4729 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:58.247058    4729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:35:58.421708    4729 main.go:141] libmachine: Creating SSH key...
	I0818 12:35:58.584982    4729 main.go:141] libmachine: Creating Disk image...
	I0818 12:35:58.584991    4729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:35:58.585269    4729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:35:58.595017    4729 main.go:141] libmachine: STDOUT: 
	I0818 12:35:58.595044    4729 main.go:141] libmachine: STDERR: 
	I0818 12:35:58.595105    4729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2 +20000M
	I0818 12:35:58.603164    4729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:35:58.603180    4729 main.go:141] libmachine: STDERR: 
	I0818 12:35:58.603197    4729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:35:58.603202    4729 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:35:58.603217    4729 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:35:58.603245    4729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:ba:25:eb:73:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:35:58.604967    4729 main.go:141] libmachine: STDOUT: 
	I0818 12:35:58.604991    4729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:35:58.605008    4729 client.go:171] duration metric: took 358.481875ms to LocalClient.Create
	I0818 12:36:00.607183    4729 start.go:128] duration metric: took 2.383347792s to createHost
	I0818 12:36:00.607247    4729 start.go:83] releasing machines lock for "custom-flannel-937000", held for 2.383469334s
	W0818 12:36:00.607329    4729 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:00.618378    4729 out.go:177] * Deleting "custom-flannel-937000" in qemu2 ...
	W0818 12:36:00.644480    4729 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:00.644514    4729 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:05.646774    4729 start.go:360] acquireMachinesLock for custom-flannel-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:05.647359    4729 start.go:364] duration metric: took 467.333µs to acquireMachinesLock for "custom-flannel-937000"
	I0818 12:36:05.647510    4729 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:05.647792    4729 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:05.653526    4729 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:05.705471    4729 start.go:159] libmachine.API.Create for "custom-flannel-937000" (driver="qemu2")
	I0818 12:36:05.705522    4729 client.go:168] LocalClient.Create starting
	I0818 12:36:05.705646    4729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:05.705730    4729 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:05.705749    4729 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:05.705807    4729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:05.705857    4729 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:05.705868    4729 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:05.706673    4729 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:05.868570    4729 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:05.939124    4729 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:05.939133    4729 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:05.939349    4729 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:36:05.949328    4729 main.go:141] libmachine: STDOUT: 
	I0818 12:36:05.949353    4729 main.go:141] libmachine: STDERR: 
	I0818 12:36:05.949406    4729 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2 +20000M
	I0818 12:36:05.957933    4729 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:05.957949    4729 main.go:141] libmachine: STDERR: 
	I0818 12:36:05.957960    4729 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:36:05.957965    4729 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:05.957987    4729 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:05.958011    4729 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:bb:34:12:05:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/custom-flannel-937000/disk.qcow2
	I0818 12:36:05.959737    4729 main.go:141] libmachine: STDOUT: 
	I0818 12:36:05.959764    4729 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:05.959778    4729 client.go:171] duration metric: took 254.254916ms to LocalClient.Create
	I0818 12:36:07.961866    4729 start.go:128] duration metric: took 2.3140845s to createHost
	I0818 12:36:07.961900    4729 start.go:83] releasing machines lock for "custom-flannel-937000", held for 2.314551458s
	W0818 12:36:07.962130    4729 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:07.978488    4729 out.go:201] 
	W0818 12:36:07.982509    4729 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:36:07.982527    4729 out.go:270] * 
	* 
	W0818 12:36:07.983777    4729 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:36:07.997435    4729 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.90s)
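Note that disk preparation itself succeeds on every attempt: the two qemu-img steps logged above complete with empty STDERR, and only the subsequent socket_vmnet_client launch fails. Reproducing those steps by hand looks like the following (MACHINE_DIR is a placeholder for the per-profile machines directory shown in the log):

	# Convert the raw seed image to qcow2, then grow it by 20000 MB,
	# exactly as the libmachine log lines show.
	MACHINE_DIR="$HOME/.minikube/machines/<profile>"   # placeholder path
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M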

TestNetworkPlugins/group/false/Start (9.93s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.931838083s)

-- stdout --
	* [false-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-937000" primary control-plane node in "false-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:36:10.389357    4846 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:36:10.389494    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:10.389497    4846 out.go:358] Setting ErrFile to fd 2...
	I0818 12:36:10.389499    4846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:10.389665    4846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:36:10.390806    4846 out.go:352] Setting JSON to false
	I0818 12:36:10.407120    4846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1724005830,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:36:10.407204    4846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:36:10.413075    4846 out.go:177] * [false-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:36:10.421100    4846 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:36:10.421158    4846 notify.go:220] Checking for updates...
	I0818 12:36:10.427051    4846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:36:10.430088    4846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:36:10.433065    4846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:36:10.436096    4846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:36:10.439050    4846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:36:10.442285    4846 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:36:10.442359    4846 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:36:10.442406    4846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:36:10.447061    4846 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:36:10.453969    4846 start.go:297] selected driver: qemu2
	I0818 12:36:10.453976    4846 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:36:10.453982    4846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:36:10.456276    4846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:36:10.459098    4846 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:36:10.462123    4846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:36:10.462158    4846 cni.go:84] Creating CNI manager for "false"
	I0818 12:36:10.462190    4846 start.go:340] cluster config:
	{Name:false-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:36:10.465887    4846 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:36:10.473023    4846 out.go:177] * Starting "false-937000" primary control-plane node in "false-937000" cluster
	I0818 12:36:10.477052    4846 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:36:10.477070    4846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:36:10.477081    4846 cache.go:56] Caching tarball of preloaded images
	I0818 12:36:10.477150    4846 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:36:10.477156    4846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:36:10.477242    4846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/false-937000/config.json ...
	I0818 12:36:10.477253    4846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/false-937000/config.json: {Name:mk6b8213a6fd67c26203c0b4323deeabeb8c7f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:36:10.477465    4846 start.go:360] acquireMachinesLock for false-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:10.477500    4846 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "false-937000"
	I0818 12:36:10.477512    4846 start.go:93] Provisioning new machine with config: &{Name:false-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:10.477537    4846 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:10.485059    4846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:10.502430    4846 start.go:159] libmachine.API.Create for "false-937000" (driver="qemu2")
	I0818 12:36:10.502466    4846 client.go:168] LocalClient.Create starting
	I0818 12:36:10.502545    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:10.502577    4846 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:10.502587    4846 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:10.502624    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:10.502647    4846 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:10.502658    4846 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:10.503013    4846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:10.654819    4846 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:10.833094    4846 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:10.833102    4846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:10.833336    4846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:10.842798    4846 main.go:141] libmachine: STDOUT: 
	I0818 12:36:10.842816    4846 main.go:141] libmachine: STDERR: 
	I0818 12:36:10.842875    4846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2 +20000M
	I0818 12:36:10.851305    4846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:10.851323    4846 main.go:141] libmachine: STDERR: 
	I0818 12:36:10.851334    4846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:10.851338    4846 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:10.851354    4846 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:10.851390    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c6:eb:49:e8:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:10.853108    4846 main.go:141] libmachine: STDOUT: 
	I0818 12:36:10.853123    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:10.853140    4846 client.go:171] duration metric: took 350.674708ms to LocalClient.Create
	I0818 12:36:12.855302    4846 start.go:128] duration metric: took 2.377775042s to createHost
	I0818 12:36:12.855374    4846 start.go:83] releasing machines lock for "false-937000", held for 2.377898541s
	W0818 12:36:12.855450    4846 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:12.871371    4846 out.go:177] * Deleting "false-937000" in qemu2 ...
	W0818 12:36:12.897650    4846 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:12.897693    4846 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:17.898675    4846 start.go:360] acquireMachinesLock for false-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:17.899286    4846 start.go:364] duration metric: took 508.875µs to acquireMachinesLock for "false-937000"
	I0818 12:36:17.899412    4846 start.go:93] Provisioning new machine with config: &{Name:false-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:17.899777    4846 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:17.905439    4846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:17.957389    4846 start.go:159] libmachine.API.Create for "false-937000" (driver="qemu2")
	I0818 12:36:17.957448    4846 client.go:168] LocalClient.Create starting
	I0818 12:36:17.957599    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:17.957672    4846 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:17.957689    4846 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:17.957757    4846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:17.957818    4846 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:17.957830    4846 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:17.958382    4846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:18.119386    4846 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:18.228630    4846 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:18.228637    4846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:18.228866    4846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:18.238281    4846 main.go:141] libmachine: STDOUT: 
	I0818 12:36:18.238302    4846 main.go:141] libmachine: STDERR: 
	I0818 12:36:18.238362    4846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2 +20000M
	I0818 12:36:18.246477    4846 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:18.246494    4846 main.go:141] libmachine: STDERR: 
	I0818 12:36:18.246504    4846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:18.246508    4846 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:18.246523    4846 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:18.246557    4846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c4:70:c5:e2:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/false-937000/disk.qcow2
	I0818 12:36:18.248230    4846 main.go:141] libmachine: STDOUT: 
	I0818 12:36:18.248254    4846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:18.248267    4846 client.go:171] duration metric: took 290.819ms to LocalClient.Create
	I0818 12:36:20.250453    4846 start.go:128] duration metric: took 2.350674917s to createHost
	I0818 12:36:20.250529    4846 start.go:83] releasing machines lock for "false-937000", held for 2.351210375s
	W0818 12:36:20.250999    4846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:20.263711    4846 out.go:201] 
	W0818 12:36:20.267790    4846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:36:20.267815    4846 out.go:270] * 
	* 
	W0818 12:36:20.270820    4846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:36:20.279665    4846 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)
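
Note: every failure in this group reduces to the same stderr line above, Failed to connect to "/var/run/socket_vmnet": Connection refused. The socket_vmnet daemon on the build host is not serving its Unix socket, so socket_vmnet_client cannot hand a networking file descriptor to qemu-system-aarch64 and VM creation aborts on both attempts. A minimal diagnostic sketch for the host, assuming a Homebrew-managed socket_vmnet install as in the minikube qemu2 driver docs (the service name and socket path may differ on this Jenkins agent):

	# Is the daemon alive, and does the socket path from the logs exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Restart the daemon; Homebrew's launchd service is one common setup
	sudo brew services restart socket_vmnet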

TestNetworkPlugins/group/enable-default-cni/Start (9.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.965651333s)

-- stdout --
	* [enable-default-cni-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-937000" primary control-plane node in "enable-default-cni-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:36:22.528422    4955 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:36:22.528556    4955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:22.528560    4955 out.go:358] Setting ErrFile to fd 2...
	I0818 12:36:22.528562    4955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:22.528695    4955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:36:22.529737    4955 out.go:352] Setting JSON to false
	I0818 12:36:22.546140    4955 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3952,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:36:22.546204    4955 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:36:22.552085    4955 out.go:177] * [enable-default-cni-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:36:22.560119    4955 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:36:22.560217    4955 notify.go:220] Checking for updates...
	I0818 12:36:22.570063    4955 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:36:22.574052    4955 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:36:22.577070    4955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:36:22.580098    4955 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:36:22.583067    4955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:36:22.586457    4955 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:36:22.586522    4955 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:36:22.586564    4955 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:36:22.590060    4955 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:36:22.597011    4955 start.go:297] selected driver: qemu2
	I0818 12:36:22.597016    4955 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:36:22.597021    4955 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:36:22.599207    4955 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:36:22.603145    4955 out.go:177] * Automatically selected the socket_vmnet network
	E0818 12:36:22.606140    4955 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0818 12:36:22.606152    4955 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:36:22.606198    4955 cni.go:84] Creating CNI manager for "bridge"
	I0818 12:36:22.606202    4955 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:36:22.606232    4955 start.go:340] cluster config:
	{Name:enable-default-cni-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:36:22.609657    4955 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:36:22.618036    4955 out.go:177] * Starting "enable-default-cni-937000" primary control-plane node in "enable-default-cni-937000" cluster
	I0818 12:36:22.622035    4955 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:36:22.622052    4955 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:36:22.622058    4955 cache.go:56] Caching tarball of preloaded images
	I0818 12:36:22.622111    4955 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:36:22.622116    4955 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:36:22.622169    4955 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/enable-default-cni-937000/config.json ...
	I0818 12:36:22.622180    4955 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/enable-default-cni-937000/config.json: {Name:mkaf511d58f74994f618946bec73cf41a0f71a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:36:22.622512    4955 start.go:360] acquireMachinesLock for enable-default-cni-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:22.622542    4955 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "enable-default-cni-937000"
	I0818 12:36:22.622553    4955 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:22.622585    4955 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:22.631112    4955 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:22.646496    4955 start.go:159] libmachine.API.Create for "enable-default-cni-937000" (driver="qemu2")
	I0818 12:36:22.646524    4955 client.go:168] LocalClient.Create starting
	I0818 12:36:22.646588    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:22.646617    4955 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:22.646625    4955 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:22.646665    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:22.646689    4955 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:22.646699    4955 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:22.647037    4955 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:22.801731    4955 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:23.004088    4955 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:23.004100    4955 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:23.004367    4955 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:23.015030    4955 main.go:141] libmachine: STDOUT: 
	I0818 12:36:23.015054    4955 main.go:141] libmachine: STDERR: 
	I0818 12:36:23.015140    4955 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2 +20000M
	I0818 12:36:23.024395    4955 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:23.024425    4955 main.go:141] libmachine: STDERR: 
	I0818 12:36:23.024450    4955 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:23.024455    4955 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:23.024469    4955 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:23.024506    4955 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c2:93:23:24:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:23.026498    4955 main.go:141] libmachine: STDOUT: 
	I0818 12:36:23.026525    4955 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:23.026561    4955 client.go:171] duration metric: took 380.034416ms to LocalClient.Create
	I0818 12:36:25.028730    4955 start.go:128] duration metric: took 2.40615325s to createHost
	I0818 12:36:25.028795    4955 start.go:83] releasing machines lock for "enable-default-cni-937000", held for 2.406278167s
	W0818 12:36:25.028939    4955 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:25.035810    4955 out.go:177] * Deleting "enable-default-cni-937000" in qemu2 ...
	W0818 12:36:25.064953    4955 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:25.064981    4955 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:30.067096    4955 start.go:360] acquireMachinesLock for enable-default-cni-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:30.067750    4955 start.go:364] duration metric: took 554.792µs to acquireMachinesLock for "enable-default-cni-937000"
	I0818 12:36:30.067908    4955 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:30.068127    4955 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:30.076733    4955 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:30.117149    4955 start.go:159] libmachine.API.Create for "enable-default-cni-937000" (driver="qemu2")
	I0818 12:36:30.117206    4955 client.go:168] LocalClient.Create starting
	I0818 12:36:30.117330    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:30.117387    4955 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:30.117403    4955 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:30.117464    4955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:30.117506    4955 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:30.117516    4955 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:30.118017    4955 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:30.276516    4955 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:30.395320    4955 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:30.395326    4955 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:30.395535    4955 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:30.404961    4955 main.go:141] libmachine: STDOUT: 
	I0818 12:36:30.404977    4955 main.go:141] libmachine: STDERR: 
	I0818 12:36:30.405023    4955 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2 +20000M
	I0818 12:36:30.413371    4955 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:30.413385    4955 main.go:141] libmachine: STDERR: 
	I0818 12:36:30.413398    4955 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:30.413403    4955 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:30.413416    4955 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:30.413450    4955 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9e:14:18:9a:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/enable-default-cni-937000/disk.qcow2
	I0818 12:36:30.415113    4955 main.go:141] libmachine: STDOUT: 
	I0818 12:36:30.415128    4955 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:30.415141    4955 client.go:171] duration metric: took 297.933ms to LocalClient.Create
	I0818 12:36:32.417333    4955 start.go:128] duration metric: took 2.349195208s to createHost
	I0818 12:36:32.417441    4955 start.go:83] releasing machines lock for "enable-default-cni-937000", held for 2.349688917s
	W0818 12:36:32.417897    4955 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:32.433922    4955 out.go:201] 
	W0818 12:36:32.437102    4955 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:36:32.437164    4955 out.go:270] * 
	* 
	W0818 12:36:32.439828    4955 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:36:32.450902    4955 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.97s)
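
Note: the stderr above also records E0818 12:36:22.606140 "Found deprecated --enable-default-cni flag, setting --cni=bridge", which is why the cluster config dump carries NetworkPlugin:cni and CNI:bridge. The failure itself is still the socket_vmnet connection refusal, not the CNI selection. An equivalent invocation without the deprecated flag would look like this (a sketch reusing the profile and flags from the failing command above):

	out/minikube-darwin-arm64 start -p enable-default-cni-937000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2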

TestNetworkPlugins/group/flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.768914792s)

-- stdout --
	* [flannel-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-937000" primary control-plane node in "flannel-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:36:34.661651    5064 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:36:34.661776    5064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:34.661780    5064 out.go:358] Setting ErrFile to fd 2...
	I0818 12:36:34.661782    5064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:34.661906    5064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:36:34.663028    5064 out.go:352] Setting JSON to false
	I0818 12:36:34.679385    5064 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3964,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:36:34.679455    5064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:36:34.686715    5064 out.go:177] * [flannel-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:36:34.694555    5064 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:36:34.694588    5064 notify.go:220] Checking for updates...
	I0818 12:36:34.701466    5064 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:36:34.704445    5064 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:36:34.709487    5064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:36:34.710923    5064 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:36:34.714489    5064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:36:34.717952    5064 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:36:34.718028    5064 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:36:34.718073    5064 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:36:34.722333    5064 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:36:34.729503    5064 start.go:297] selected driver: qemu2
	I0818 12:36:34.729509    5064 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:36:34.729514    5064 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:36:34.731671    5064 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:36:34.734533    5064 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:36:34.737498    5064 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:36:34.737515    5064 cni.go:84] Creating CNI manager for "flannel"
	I0818 12:36:34.737518    5064 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0818 12:36:34.737544    5064 start.go:340] cluster config:
	{Name:flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:36:34.740988    5064 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:36:34.747448    5064 out.go:177] * Starting "flannel-937000" primary control-plane node in "flannel-937000" cluster
	I0818 12:36:34.751515    5064 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:36:34.751531    5064 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:36:34.751541    5064 cache.go:56] Caching tarball of preloaded images
	I0818 12:36:34.751612    5064 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:36:34.751618    5064 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:36:34.751684    5064 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/flannel-937000/config.json ...
	I0818 12:36:34.751695    5064 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/flannel-937000/config.json: {Name:mk907ba90bef5b7ed7ba4dc8a8a25eab71312f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:36:34.752024    5064 start.go:360] acquireMachinesLock for flannel-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:34.752054    5064 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "flannel-937000"
	I0818 12:36:34.752065    5064 start.go:93] Provisioning new machine with config: &{Name:flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:34.752087    5064 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:34.759504    5064 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:34.774649    5064 start.go:159] libmachine.API.Create for "flannel-937000" (driver="qemu2")
	I0818 12:36:34.774678    5064 client.go:168] LocalClient.Create starting
	I0818 12:36:34.774742    5064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:34.774772    5064 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:34.774784    5064 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:34.774822    5064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:34.774845    5064 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:34.774855    5064 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:34.775299    5064 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:34.927502    5064 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:34.958950    5064 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:34.958955    5064 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:34.959164    5064 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:34.968419    5064 main.go:141] libmachine: STDOUT: 
	I0818 12:36:34.968439    5064 main.go:141] libmachine: STDERR: 
	I0818 12:36:34.968483    5064 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2 +20000M
	I0818 12:36:34.976509    5064 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:34.976530    5064 main.go:141] libmachine: STDERR: 
	I0818 12:36:34.976541    5064 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:34.976545    5064 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:34.976556    5064 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:34.976585    5064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:48:38:8f:a9:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:34.978287    5064 main.go:141] libmachine: STDOUT: 
	I0818 12:36:34.978303    5064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:34.978320    5064 client.go:171] duration metric: took 203.639792ms to LocalClient.Create
	I0818 12:36:36.980531    5064 start.go:128] duration metric: took 2.228446625s to createHost
	I0818 12:36:36.980607    5064 start.go:83] releasing machines lock for "flannel-937000", held for 2.228575667s
	W0818 12:36:36.980681    5064 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:36.998190    5064 out.go:177] * Deleting "flannel-937000" in qemu2 ...
	W0818 12:36:37.025931    5064 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:37.025968    5064 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:42.028087    5064 start.go:360] acquireMachinesLock for flannel-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:42.028598    5064 start.go:364] duration metric: took 423.458µs to acquireMachinesLock for "flannel-937000"
	I0818 12:36:42.028749    5064 start.go:93] Provisioning new machine with config: &{Name:flannel-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:42.029130    5064 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:42.038654    5064 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:42.088569    5064 start.go:159] libmachine.API.Create for "flannel-937000" (driver="qemu2")
	I0818 12:36:42.088629    5064 client.go:168] LocalClient.Create starting
	I0818 12:36:42.088749    5064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:42.088808    5064 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:42.088850    5064 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:42.088908    5064 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:42.088953    5064 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:42.088968    5064 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:42.089478    5064 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:42.250160    5064 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:42.341787    5064 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:42.341795    5064 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:42.342013    5064 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:42.351394    5064 main.go:141] libmachine: STDOUT: 
	I0818 12:36:42.351409    5064 main.go:141] libmachine: STDERR: 
	I0818 12:36:42.351458    5064 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2 +20000M
	I0818 12:36:42.359426    5064 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:42.359442    5064 main.go:141] libmachine: STDERR: 
	I0818 12:36:42.359457    5064 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:42.359462    5064 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:42.359471    5064 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:42.359507    5064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a8:db:e1:93:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/flannel-937000/disk.qcow2
	I0818 12:36:42.361243    5064 main.go:141] libmachine: STDOUT: 
	I0818 12:36:42.361265    5064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:42.361278    5064 client.go:171] duration metric: took 272.6465ms to LocalClient.Create
	I0818 12:36:44.363455    5064 start.go:128] duration metric: took 2.3343195s to createHost
	I0818 12:36:44.363535    5064 start.go:83] releasing machines lock for "flannel-937000", held for 2.334944375s
	W0818 12:36:44.363985    5064 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:44.373874    5064 out.go:201] 
	W0818 12:36:44.377677    5064 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:36:44.377701    5064 out.go:270] * 
	* 
	W0818 12:36:44.380359    5064 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:36:44.389665    5064 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
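
Every failure in this group reduces to the same precondition: nothing is listening on the unix socket at /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU ever starts. A minimal Go sketch of that check, assuming only the standard library (a hypothetical diagnostic, not part of minikube or net_test.go):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the machine config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the driver's failure mode:
			// the socket file may exist, but no socket_vmnet daemon is accepting.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way on the build agent, the fix is environmental (the socket_vmnet daemon needs to be running) rather than anything in the test code.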

TestNetworkPlugins/group/bridge/Start (9.92s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.916221375s)

-- stdout --
	* [bridge-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-937000" primary control-plane node in "bridge-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:36:46.827115    5181 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:36:46.827238    5181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:46.827242    5181 out.go:358] Setting ErrFile to fd 2...
	I0818 12:36:46.827245    5181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:46.827388    5181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:36:46.828731    5181 out.go:352] Setting JSON to false
	I0818 12:36:46.845846    5181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3976,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:36:46.845937    5181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:36:46.852859    5181 out.go:177] * [bridge-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:36:46.858607    5181 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:36:46.858672    5181 notify.go:220] Checking for updates...
	I0818 12:36:46.865903    5181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:36:46.867324    5181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:36:46.870906    5181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:36:46.873901    5181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:36:46.876955    5181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:36:46.880290    5181 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:36:46.880355    5181 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:36:46.880401    5181 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:36:46.884916    5181 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:36:46.891807    5181 start.go:297] selected driver: qemu2
	I0818 12:36:46.891814    5181 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:36:46.891820    5181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:36:46.893952    5181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:36:46.897908    5181 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:36:46.900948    5181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:36:46.900966    5181 cni.go:84] Creating CNI manager for "bridge"
	I0818 12:36:46.900971    5181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:36:46.901003    5181 start.go:340] cluster config:
	{Name:bridge-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:36:46.904377    5181 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:36:46.912939    5181 out.go:177] * Starting "bridge-937000" primary control-plane node in "bridge-937000" cluster
	I0818 12:36:46.916869    5181 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:36:46.916887    5181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:36:46.916897    5181 cache.go:56] Caching tarball of preloaded images
	I0818 12:36:46.916965    5181 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:36:46.916978    5181 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:36:46.917046    5181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/bridge-937000/config.json ...
	I0818 12:36:46.917057    5181 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/bridge-937000/config.json: {Name:mk8eec75809fd4ec1f3eb9be20e5c0511552a96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:36:46.917262    5181 start.go:360] acquireMachinesLock for bridge-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:46.917294    5181 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "bridge-937000"
	I0818 12:36:46.917306    5181 start.go:93] Provisioning new machine with config: &{Name:bridge-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:46.917333    5181 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:46.924822    5181 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:46.940410    5181 start.go:159] libmachine.API.Create for "bridge-937000" (driver="qemu2")
	I0818 12:36:46.940436    5181 client.go:168] LocalClient.Create starting
	I0818 12:36:46.940500    5181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:46.940532    5181 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:46.940542    5181 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:46.940581    5181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:46.940604    5181 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:46.940611    5181 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:46.940951    5181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:47.092249    5181 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:47.253797    5181 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:47.253807    5181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:47.254068    5181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:47.263888    5181 main.go:141] libmachine: STDOUT: 
	I0818 12:36:47.263907    5181 main.go:141] libmachine: STDERR: 
	I0818 12:36:47.263967    5181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2 +20000M
	I0818 12:36:47.272320    5181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:47.272336    5181 main.go:141] libmachine: STDERR: 
	I0818 12:36:47.272351    5181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:47.272356    5181 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:47.272372    5181 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:47.272398    5181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:e7:76:f0:20:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:47.274133    5181 main.go:141] libmachine: STDOUT: 
	I0818 12:36:47.274149    5181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:47.274166    5181 client.go:171] duration metric: took 333.729625ms to LocalClient.Create
	I0818 12:36:49.276306    5181 start.go:128] duration metric: took 2.358984s to createHost
	I0818 12:36:49.276369    5181 start.go:83] releasing machines lock for "bridge-937000", held for 2.35910025s
	W0818 12:36:49.276452    5181 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:49.283349    5181 out.go:177] * Deleting "bridge-937000" in qemu2 ...
	W0818 12:36:49.310178    5181 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:49.310197    5181 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:54.310464    5181 start.go:360] acquireMachinesLock for bridge-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:54.310596    5181 start.go:364] duration metric: took 97.083µs to acquireMachinesLock for "bridge-937000"
	I0818 12:36:54.310611    5181 start.go:93] Provisioning new machine with config: &{Name:bridge-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:54.310663    5181 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:54.319027    5181 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:54.335592    5181 start.go:159] libmachine.API.Create for "bridge-937000" (driver="qemu2")
	I0818 12:36:54.335630    5181 client.go:168] LocalClient.Create starting
	I0818 12:36:54.335709    5181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:54.335750    5181 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:54.335760    5181 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:54.335796    5181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:54.335818    5181 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:54.335824    5181 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:54.336123    5181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:54.488787    5181 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:54.646078    5181 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:54.646087    5181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:54.646331    5181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:54.656219    5181 main.go:141] libmachine: STDOUT: 
	I0818 12:36:54.656240    5181 main.go:141] libmachine: STDERR: 
	I0818 12:36:54.656288    5181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2 +20000M
	I0818 12:36:54.664545    5181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:54.664561    5181 main.go:141] libmachine: STDERR: 
	I0818 12:36:54.664572    5181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:54.664577    5181 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:54.664589    5181 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:54.664636    5181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:4d:03:9c:50:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/bridge-937000/disk.qcow2
	I0818 12:36:54.666334    5181 main.go:141] libmachine: STDOUT: 
	I0818 12:36:54.666353    5181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:54.666366    5181 client.go:171] duration metric: took 330.7365ms to LocalClient.Create
	I0818 12:36:56.668560    5181 start.go:128] duration metric: took 2.357892708s to createHost
	I0818 12:36:56.668649    5181 start.go:83] releasing machines lock for "bridge-937000", held for 2.358075625s
	W0818 12:36:56.668995    5181 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:36:56.683652    5181 out.go:201] 
	W0818 12:36:56.687566    5181 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:36:56.687593    5181 out.go:270] * 
	* 
	W0818 12:36:56.690129    5181 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:36:56.702563    5181 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.92s)
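
Note that the disk-image preparation succeeds on every attempt: qemu-img convert (raw to qcow2) and qemu-img resize both return cleanly, and each run only dies afterwards at the socket_vmnet_client handshake. A short Go sketch of those two qemu-img steps as the driver logs them, with placeholder file names standing in for the per-profile paths under .minikube/machines (illustrative only, not minikube's implementation):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and fails loudly, mirroring the STDOUT/STDERR
	// pairs that libmachine logs after each qemu-img invocation.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw := "disk.qcow2.raw" // placeholder for <profile>/disk.qcow2.raw
		img := "disk.qcow2"     // placeholder for <profile>/disk.qcow2
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
		run("qemu-img", "resize", img, "+20000M") // grow by 20000 MB, as logged
	}

Both commands come verbatim from the log above, which localizes the failure to the networking step rather than image creation.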

TestNetworkPlugins/group/kubenet/Start (10.09s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-937000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.087425s)

-- stdout --
	* [kubenet-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-937000" primary control-plane node in "kubenet-937000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-937000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:36:58.897872    5306 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:36:58.898025    5306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:58.898028    5306 out.go:358] Setting ErrFile to fd 2...
	I0818 12:36:58.898031    5306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:36:58.898167    5306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:36:58.899264    5306 out.go:352] Setting JSON to false
	I0818 12:36:58.916057    5306 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3988,"bootTime":1724005830,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:36:58.916124    5306 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:36:58.921106    5306 out.go:177] * [kubenet-937000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:36:58.928922    5306 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:36:58.928978    5306 notify.go:220] Checking for updates...
	I0818 12:36:58.935997    5306 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:36:58.938970    5306 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:36:58.942007    5306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:36:58.945019    5306 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:36:58.947982    5306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:36:58.951280    5306 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:36:58.951349    5306 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:36:58.951399    5306 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:36:58.956003    5306 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:36:58.962944    5306 start.go:297] selected driver: qemu2
	I0818 12:36:58.962950    5306 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:36:58.962955    5306 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:36:58.965167    5306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:36:58.968007    5306 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:36:58.971026    5306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:36:58.971048    5306 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0818 12:36:58.971076    5306 start.go:340] cluster config:
	{Name:kubenet-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:36:58.974793    5306 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:36:58.981983    5306 out.go:177] * Starting "kubenet-937000" primary control-plane node in "kubenet-937000" cluster
	I0818 12:36:58.985975    5306 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:36:58.985997    5306 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:36:58.986007    5306 cache.go:56] Caching tarball of preloaded images
	I0818 12:36:58.986079    5306 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:36:58.986090    5306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:36:58.986157    5306 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kubenet-937000/config.json ...
	I0818 12:36:58.986168    5306 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/kubenet-937000/config.json: {Name:mkefa9a91b7beebaa63440e0b6ebb6a54a223171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:36:58.986377    5306 start.go:360] acquireMachinesLock for kubenet-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:36:58.986409    5306 start.go:364] duration metric: took 27µs to acquireMachinesLock for "kubenet-937000"
	I0818 12:36:58.986422    5306 start.go:93] Provisioning new machine with config: &{Name:kubenet-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:36:58.986448    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:36:58.993961    5306 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:36:59.010684    5306 start.go:159] libmachine.API.Create for "kubenet-937000" (driver="qemu2")
	I0818 12:36:59.010711    5306 client.go:168] LocalClient.Create starting
	I0818 12:36:59.010770    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:36:59.010802    5306 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:59.010811    5306 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:59.010848    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:36:59.010871    5306 main.go:141] libmachine: Decoding PEM data...
	I0818 12:36:59.010880    5306 main.go:141] libmachine: Parsing certificate...
	I0818 12:36:59.011213    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:36:59.160938    5306 main.go:141] libmachine: Creating SSH key...
	I0818 12:36:59.391300    5306 main.go:141] libmachine: Creating Disk image...
	I0818 12:36:59.391310    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:36:59.391567    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:36:59.401392    5306 main.go:141] libmachine: STDOUT: 
	I0818 12:36:59.401411    5306 main.go:141] libmachine: STDERR: 
	I0818 12:36:59.401477    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2 +20000M
	I0818 12:36:59.409301    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:36:59.409319    5306 main.go:141] libmachine: STDERR: 
	I0818 12:36:59.409341    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:36:59.409345    5306 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:36:59.409368    5306 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:36:59.409392    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:28:da:35:29:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:36:59.411057    5306 main.go:141] libmachine: STDOUT: 
	I0818 12:36:59.411073    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:36:59.411098    5306 client.go:171] duration metric: took 400.388291ms to LocalClient.Create
	I0818 12:37:01.413315    5306 start.go:128] duration metric: took 2.42686775s to createHost
	I0818 12:37:01.413386    5306 start.go:83] releasing machines lock for "kubenet-937000", held for 2.427002333s
	W0818 12:37:01.413454    5306 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:01.428436    5306 out.go:177] * Deleting "kubenet-937000" in qemu2 ...
	W0818 12:37:01.453907    5306 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:01.453940    5306 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:06.456133    5306 start.go:360] acquireMachinesLock for kubenet-937000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:06.456657    5306 start.go:364] duration metric: took 372.291µs to acquireMachinesLock for "kubenet-937000"
	I0818 12:37:06.456779    5306 start.go:93] Provisioning new machine with config: &{Name:kubenet-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:06.457029    5306 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:06.464571    5306 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0818 12:37:06.503650    5306 start.go:159] libmachine.API.Create for "kubenet-937000" (driver="qemu2")
	I0818 12:37:06.503697    5306 client.go:168] LocalClient.Create starting
	I0818 12:37:06.503811    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:06.503875    5306 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:06.503889    5306 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:06.503949    5306 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:06.503990    5306 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:06.504002    5306 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:06.504431    5306 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:06.660857    5306 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:06.900079    5306 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:06.900091    5306 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:06.900341    5306 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:37:06.910098    5306 main.go:141] libmachine: STDOUT: 
	I0818 12:37:06.910120    5306 main.go:141] libmachine: STDERR: 
	I0818 12:37:06.910188    5306 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2 +20000M
	I0818 12:37:06.918499    5306 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:06.918517    5306 main.go:141] libmachine: STDERR: 
	I0818 12:37:06.918532    5306 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:37:06.918536    5306 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:06.918552    5306 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:06.918587    5306 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:8e:1c:c1:d7:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/kubenet-937000/disk.qcow2
	I0818 12:37:06.920292    5306 main.go:141] libmachine: STDOUT: 
	I0818 12:37:06.920313    5306 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:06.920331    5306 client.go:171] duration metric: took 416.635375ms to LocalClient.Create
	I0818 12:37:08.921690    5306 start.go:128] duration metric: took 2.464656084s to createHost
	I0818 12:37:08.921735    5306 start.go:83] releasing machines lock for "kubenet-937000", held for 2.465095583s
	W0818 12:37:08.921857    5306 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-937000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:08.931159    5306 out.go:201] 
	W0818 12:37:08.935159    5306 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:08.935192    5306 out.go:270] * 
	W0818 12:37:08.936231    5306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:08.947132    5306 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.09s)
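
Every start in this group dies on the same line of the trace above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. As a minimal sketch (illustrative only, not minikube's code; the socket path is copied from the failing command lines), a Go probe can separate "socket file missing" from "daemon not accepting":

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Probe the unix socket that socket_vmnet_client needs. "connection
	// refused", as in the log above, means the socket file exists but no
	// socket_vmnet daemon is accepting connections on it.
	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing command lines
		if _, err := os.Stat(sock); err != nil {
			fmt.Fprintln(os.Stderr, "socket file missing:", err)
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "daemon not accepting:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}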

TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.072354458s)

-- stdout --
	* [old-k8s-version-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-088000" primary control-plane node in "old-k8s-version-088000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:11.336403    5419 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:11.336515    5419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:11.336518    5419 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:11.336520    5419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:11.336672    5419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:11.337986    5419 out.go:352] Setting JSON to false
	I0818 12:37:11.354917    5419 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4001,"bootTime":1724005830,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:11.355007    5419 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:11.359943    5419 out.go:177] * [old-k8s-version-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:11.368112    5419 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:11.368135    5419 notify.go:220] Checking for updates...
	I0818 12:37:11.375052    5419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:11.378087    5419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:11.381025    5419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:11.384045    5419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:11.387075    5419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:11.388723    5419 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:11.388792    5419 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:37:11.388842    5419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:11.393079    5419 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:37:11.399891    5419 start.go:297] selected driver: qemu2
	I0818 12:37:11.399898    5419 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:37:11.399904    5419 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:11.401997    5419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:37:11.405080    5419 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:37:11.408122    5419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:11.408158    5419 cni.go:84] Creating CNI manager for ""
	I0818 12:37:11.408167    5419 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 12:37:11.408198    5419 start.go:340] cluster config:
	{Name:old-k8s-version-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:11.411510    5419 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:11.419054    5419 out.go:177] * Starting "old-k8s-version-088000" primary control-plane node in "old-k8s-version-088000" cluster
	I0818 12:37:11.423080    5419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 12:37:11.423093    5419 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 12:37:11.423098    5419 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:11.423146    5419 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:37:11.423151    5419 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 12:37:11.423223    5419 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/old-k8s-version-088000/config.json ...
	I0818 12:37:11.423235    5419 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/old-k8s-version-088000/config.json: {Name:mk485f5e17b8e091bd30191c1a4d7437c1476f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:37:11.423557    5419 start.go:360] acquireMachinesLock for old-k8s-version-088000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:11.423589    5419 start.go:364] duration metric: took 24.834µs to acquireMachinesLock for "old-k8s-version-088000"
	I0818 12:37:11.423600    5419 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:11.423630    5419 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:11.432020    5419 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:11.446934    5419 start.go:159] libmachine.API.Create for "old-k8s-version-088000" (driver="qemu2")
	I0818 12:37:11.446953    5419 client.go:168] LocalClient.Create starting
	I0818 12:37:11.447018    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:11.447050    5419 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:11.447058    5419 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:11.447095    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:11.447118    5419 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:11.447128    5419 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:11.447529    5419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:11.597472    5419 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:11.669287    5419 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:11.669292    5419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:11.669513    5419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:11.679040    5419 main.go:141] libmachine: STDOUT: 
	I0818 12:37:11.679061    5419 main.go:141] libmachine: STDERR: 
	I0818 12:37:11.679118    5419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2 +20000M
	I0818 12:37:11.687162    5419 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:11.687177    5419 main.go:141] libmachine: STDERR: 
	I0818 12:37:11.687193    5419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:11.687196    5419 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:11.687212    5419 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:11.687252    5419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b3:e6:0f:5c:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:11.689089    5419 main.go:141] libmachine: STDOUT: 
	I0818 12:37:11.689104    5419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:11.689126    5419 client.go:171] duration metric: took 242.170084ms to LocalClient.Create
	I0818 12:37:13.691224    5419 start.go:128] duration metric: took 2.267607625s to createHost
	I0818 12:37:13.691280    5419 start.go:83] releasing machines lock for "old-k8s-version-088000", held for 2.26770775s
	W0818 12:37:13.691343    5419 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:13.701280    5419 out.go:177] * Deleting "old-k8s-version-088000" in qemu2 ...
	W0818 12:37:13.725405    5419 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:13.725425    5419 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:18.727654    5419 start.go:360] acquireMachinesLock for old-k8s-version-088000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:18.728218    5419 start.go:364] duration metric: took 449.5µs to acquireMachinesLock for "old-k8s-version-088000"
	I0818 12:37:18.728371    5419 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:18.728788    5419 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:18.734329    5419 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:18.784164    5419 start.go:159] libmachine.API.Create for "old-k8s-version-088000" (driver="qemu2")
	I0818 12:37:18.784210    5419 client.go:168] LocalClient.Create starting
	I0818 12:37:18.784335    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:18.784402    5419 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:18.784420    5419 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:18.784483    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:18.784529    5419 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:18.784542    5419 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:18.785176    5419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:18.947642    5419 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:19.316907    5419 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:19.316920    5419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:19.317142    5419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:19.326715    5419 main.go:141] libmachine: STDOUT: 
	I0818 12:37:19.326741    5419 main.go:141] libmachine: STDERR: 
	I0818 12:37:19.326805    5419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2 +20000M
	I0818 12:37:19.335429    5419 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:19.335442    5419 main.go:141] libmachine: STDERR: 
	I0818 12:37:19.335460    5419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:19.335465    5419 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:19.335482    5419 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:19.335521    5419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:98:f9:cc:3c:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:19.337303    5419 main.go:141] libmachine: STDOUT: 
	I0818 12:37:19.337314    5419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:19.337329    5419 client.go:171] duration metric: took 553.121708ms to LocalClient.Create
	I0818 12:37:21.337664    5419 start.go:128] duration metric: took 2.608851584s to createHost
	I0818 12:37:21.337774    5419 start.go:83] releasing machines lock for "old-k8s-version-088000", held for 2.609565208s
	W0818 12:37:21.338043    5419 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:21.349926    5419 out.go:201] 
	W0818 12:37:21.353943    5419 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:21.353984    5419 out.go:270] * 
	W0818 12:37:21.356055    5419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:21.366004    5419 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (63.710041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)
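
The FirstStart trace shows the retry shape behind exit status 80: one create attempt fails, the profile is deleted, a second attempt runs after five seconds ("Will try again in 5 seconds ..."), and only then does the run abort with GUEST_PROVISION. A simplified Go sketch of that control flow (illustrative only, not the actual start.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Illustrative only: the create / delete-and-retry / give-up shape
	// visible in the log above.
	func startHost(create func() error, deleteProfile func()) error {
		err := create()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteProfile()
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := create(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		failing := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		cleanup := func() { fmt.Println(`* Deleting profile in qemu2 ...`) }
		if err := startHost(failing, cleanup); err != nil {
			fmt.Println("X Exiting due to", err) // surfaces as exit status 80 in minikube
		}
	}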

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-088000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-088000 create -f testdata/busybox.yaml: exit status 1 (29.147792ms)

** stderr ** 
	error: context "old-k8s-version-088000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-088000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (29.727292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (30.578625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
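
DeployApp, and each later step in this serial group, fails as a cascade: FirstStart never created the cluster, so kubectl has no context named old-k8s-version-088000. A quick hedged check of the kubeconfig (assumes kubectl is on PATH; illustrative, not part of the test harness):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// List kubeconfig context names and report whether the test's context
	// exists; its absence is exactly the "context does not exist" error above.
	func main() {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, c := range strings.Fields(string(out)) {
			if c == "old-k8s-version-088000" {
				fmt.Println("context exists")
				return
			}
		}
		fmt.Println(`context "old-k8s-version-088000" not found, matching the error above`)
	}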

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-088000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-088000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-088000 describe deploy/metrics-server -n kube-system: exit status 1 (28.209708ms)

** stderr ** 
	error: context "old-k8s-version-088000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-088000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (29.305583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.185755042s)

-- stdout --
	* [old-k8s-version-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-088000" primary control-plane node in "old-k8s-version-088000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:23.878524    5462 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:23.878647    5462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:23.878650    5462 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:23.878652    5462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:23.878796    5462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:23.879726    5462 out.go:352] Setting JSON to false
	I0818 12:37:23.896177    5462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4013,"bootTime":1724005830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:23.896254    5462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:23.901598    5462 out.go:177] * [old-k8s-version-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:23.908501    5462 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:23.908585    5462 notify.go:220] Checking for updates...
	I0818 12:37:23.916461    5462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:23.919487    5462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:23.922563    5462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:23.925559    5462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:23.928585    5462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:23.931876    5462 config.go:182] Loaded profile config "old-k8s-version-088000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0818 12:37:23.935538    5462 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 12:37:23.938480    5462 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:23.942566    5462 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:37:23.949504    5462 start.go:297] selected driver: qemu2
	I0818 12:37:23.949516    5462 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:23.949588    5462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:23.952109    5462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:23.952157    5462 cni.go:84] Creating CNI manager for ""
	I0818 12:37:23.952164    5462 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 12:37:23.952196    5462 start.go:340] cluster config:
	{Name:old-k8s-version-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:23.955608    5462 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:23.962513    5462 out.go:177] * Starting "old-k8s-version-088000" primary control-plane node in "old-k8s-version-088000" cluster
	I0818 12:37:23.966548    5462 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 12:37:23.966566    5462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 12:37:23.966572    5462 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:23.966629    5462 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:37:23.966635    5462 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 12:37:23.966686    5462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/old-k8s-version-088000/config.json ...
	I0818 12:37:23.967113    5462 start.go:360] acquireMachinesLock for old-k8s-version-088000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:23.967139    5462 start.go:364] duration metric: took 20.125µs to acquireMachinesLock for "old-k8s-version-088000"
	I0818 12:37:23.967148    5462 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:37:23.967154    5462 fix.go:54] fixHost starting: 
	I0818 12:37:23.967266    5462 fix.go:112] recreateIfNeeded on old-k8s-version-088000: state=Stopped err=<nil>
	W0818 12:37:23.967274    5462 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:37:23.971485    5462 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-088000" ...
	I0818 12:37:23.979332    5462 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:23.979362    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:98:f9:cc:3c:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:23.981340    5462 main.go:141] libmachine: STDOUT: 
	I0818 12:37:23.981356    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:23.981384    5462 fix.go:56] duration metric: took 14.22975ms for fixHost
	I0818 12:37:23.981388    5462 start.go:83] releasing machines lock for "old-k8s-version-088000", held for 14.245416ms
	W0818 12:37:23.981393    5462 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:23.981421    5462 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:23.981425    5462 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:28.982582    5462 start.go:360] acquireMachinesLock for old-k8s-version-088000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:28.982905    5462 start.go:364] duration metric: took 248.583µs to acquireMachinesLock for "old-k8s-version-088000"
	I0818 12:37:28.982986    5462 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:37:28.982996    5462 fix.go:54] fixHost starting: 
	I0818 12:37:28.983322    5462 fix.go:112] recreateIfNeeded on old-k8s-version-088000: state=Stopped err=<nil>
	W0818 12:37:28.983333    5462 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:37:28.988506    5462 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-088000" ...
	I0818 12:37:28.995649    5462 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:28.995756    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:98:f9:cc:3c:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/old-k8s-version-088000/disk.qcow2
	I0818 12:37:29.000392    5462 main.go:141] libmachine: STDOUT: 
	I0818 12:37:29.000425    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:29.000472    5462 fix.go:56] duration metric: took 17.468416ms for fixHost
	I0818 12:37:29.000481    5462 start.go:83] releasing machines lock for "old-k8s-version-088000", held for 17.564666ms
	W0818 12:37:29.000564    5462 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:29.007603    5462 out.go:201] 
	W0818 12:37:29.011637    5462 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:29.011658    5462 out.go:270] * 
	W0818 12:37:29.013066    5462 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:29.022615    5462 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-088000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (51.79725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
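
Every start failure in this group traces back to one host-side condition: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand a network fd to qemu-system-aarch64. A minimal host-side check (standard macOS tools; the client and socket paths are taken from the log above, not assumed):

    # Is the unix socket present, and is a daemon alive to serve it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Probe the same path minikube uses: socket_vmnet_client connects and
    # then execs the given command with the socket on fd 3, so a no-op
    # command is enough to reproduce the "Connection refused" seen above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true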

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-088000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (31.631791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
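
The repeated `context "old-k8s-version-088000" does not exist` errors in this and the following tests are a downstream effect of the failed start: minikube only writes the kubeconfig entry once a cluster actually comes up. A quick way to confirm the context is genuinely absent (standard kubectl; the KUBECONFIG path is the one from this run):

    # List context names known to this run's kubeconfig; the profile name
    # only appears here after a successful start.
    KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig \
      kubectl config get-contexts -o name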

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-088000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.456667ms)

** stderr ** 
	error: context "old-k8s-version-088000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (30.462958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-088000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (30.440375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
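
Reading the diff above: the "-" entries are images the test expected but did not find, and every default v1.20.0 image is listed because `image list` had no running VM to query. Once a host reports Running, a spot check against the same JSON output could look like the sketch below (assuming the JSON schema exposes repoTags and that jq is available on the host):

    # Hypothetical spot check: does the runtime report the expected
    # apiserver image for this profile?
    out/minikube-darwin-arm64 -p old-k8s-version-088000 image list --format=json \
      | jq -r '.[].repoTags[]' | grep -F 'k8s.gcr.io/kube-apiserver:v1.20.0'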

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-088000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-088000 --alsologtostderr -v=1: exit status 83 (40.939375ms)

-- stdout --
	* The control-plane node old-k8s-version-088000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-088000"

-- /stdout --
** stderr ** 
	I0818 12:37:29.279204    5485 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:29.280283    5485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:29.280287    5485 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:29.280289    5485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:29.280434    5485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:29.280633    5485 out.go:352] Setting JSON to false
	I0818 12:37:29.280642    5485 mustload.go:65] Loading cluster: old-k8s-version-088000
	I0818 12:37:29.280812    5485 config.go:182] Loaded profile config "old-k8s-version-088000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0818 12:37:29.285541    5485 out.go:177] * The control-plane node old-k8s-version-088000 host is not running: state=Stopped
	I0818 12:37:29.288600    5485 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-088000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-088000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (29.268375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (29.469166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
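
pause exits with code 83 here because the control-plane host is Stopped, not because pausing itself failed. As a sketch, gating pause on the same status probe the post-mortem helpers already run would avoid the spurious attempt (both invocations copied from this report):

    # Only attempt pause when the host reports Running.
    host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-088000)
    if [ "$host" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p old-k8s-version-088000 --alsologtostderr -v=1
    fi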

TestStartStop/group/no-preload/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.027016375s)

-- stdout --
	* [no-preload-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-972000" primary control-plane node in "no-preload-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:29.596469    5502 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:29.596583    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:29.596587    5502 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:29.596589    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:29.596727    5502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:29.597870    5502 out.go:352] Setting JSON to false
	I0818 12:37:29.614438    5502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4019,"bootTime":1724005830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:29.614516    5502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:29.618024    5502 out.go:177] * [no-preload-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:29.624014    5502 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:29.624059    5502 notify.go:220] Checking for updates...
	I0818 12:37:29.631134    5502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:29.633992    5502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:29.637023    5502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:29.639997    5502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:29.642983    5502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:29.646365    5502 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:29.646433    5502 config.go:182] Loaded profile config "stopped-upgrade-521000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0818 12:37:29.646478    5502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:29.650982    5502 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:37:29.657933    5502 start.go:297] selected driver: qemu2
	I0818 12:37:29.657942    5502 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:37:29.657949    5502 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:29.660146    5502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:37:29.662975    5502 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:37:29.666040    5502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:29.666092    5502 cni.go:84] Creating CNI manager for ""
	I0818 12:37:29.666110    5502 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:29.666117    5502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:37:29.666166    5502 start.go:340] cluster config:
	{Name:no-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:29.669783    5502 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.676958    5502 out.go:177] * Starting "no-preload-972000" primary control-plane node in "no-preload-972000" cluster
	I0818 12:37:29.680953    5502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:29.681017    5502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/no-preload-972000/config.json ...
	I0818 12:37:29.681032    5502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/no-preload-972000/config.json: {Name:mke362d7caa0474af0d1ed41c15b17865b866842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:37:29.681047    5502 cache.go:107] acquiring lock: {Name:mkcaf27b6b9250fba1720aabd7d5e4375ecdab25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681050    5502 cache.go:107] acquiring lock: {Name:mk01c4e00ddb6df03884ff6c5dda909f048df9d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681106    5502 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0818 12:37:29.681114    5502 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 70.959µs
	I0818 12:37:29.681120    5502 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0818 12:37:29.681140    5502 cache.go:107] acquiring lock: {Name:mka98384f5da9b309f3d131d0f3c72e94440a1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681174    5502 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 12:37:29.681188    5502 cache.go:107] acquiring lock: {Name:mk4c48751d0bfca27c6cdc46d730ee66cc3fedca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681245    5502 cache.go:107] acquiring lock: {Name:mkbc85d8f06916af46374711ac99b55f24a4f989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681218    5502 cache.go:107] acquiring lock: {Name:mk542808d05c3ef0868c3c5f9085bfeb01ea328c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681249    5502 cache.go:107] acquiring lock: {Name:mk9ef2d4efe2899c666e3303a53ffa46536fdc51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681294    5502 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 12:37:29.681213    5502 cache.go:107] acquiring lock: {Name:mk8eadaa4e5bce96a6c8848d3f84e5b03556b803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:29.681360    5502 start.go:360] acquireMachinesLock for no-preload-972000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:29.681375    5502 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 12:37:29.681392    5502 start.go:364] duration metric: took 27.916µs to acquireMachinesLock for "no-preload-972000"
	I0818 12:37:29.681413    5502 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 12:37:29.681423    5502 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 12:37:29.681406    5502 start.go:93] Provisioning new machine with config: &{Name:no-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:29.681433    5502 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:29.681562    5502 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 12:37:29.681580    5502 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 12:37:29.684977    5502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:29.691797    5502 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 12:37:29.692628    5502 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 12:37:29.692684    5502 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 12:37:29.692715    5502 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 12:37:29.692775    5502 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 12:37:29.695369    5502 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 12:37:29.695446    5502 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 12:37:29.701036    5502 start.go:159] libmachine.API.Create for "no-preload-972000" (driver="qemu2")
	I0818 12:37:29.701053    5502 client.go:168] LocalClient.Create starting
	I0818 12:37:29.701133    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:29.701166    5502 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:29.701175    5502 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:29.701224    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:29.701249    5502 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:29.701258    5502 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:29.701621    5502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:29.859025    5502 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:30.038546    5502 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:30.038562    5502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:30.038816    5502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:30.048325    5502 main.go:141] libmachine: STDOUT: 
	I0818 12:37:30.048343    5502 main.go:141] libmachine: STDERR: 
	I0818 12:37:30.048395    5502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2 +20000M
	I0818 12:37:30.056898    5502 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:30.056913    5502 main.go:141] libmachine: STDERR: 
	I0818 12:37:30.056924    5502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:30.056929    5502 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:30.056938    5502 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:30.056978    5502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:5d:73:0e:80:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:30.058789    5502 main.go:141] libmachine: STDOUT: 
	I0818 12:37:30.058805    5502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:30.058822    5502 client.go:171] duration metric: took 357.769375ms to LocalClient.Create
	I0818 12:37:30.133298    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0818 12:37:30.133300    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 12:37:30.141252    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 12:37:30.153537    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 12:37:30.178099    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 12:37:30.185430    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0818 12:37:30.202594    5502 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 12:37:30.322745    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0818 12:37:30.322761    5502 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 641.547625ms
	I0818 12:37:30.322768    5502 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0818 12:37:32.059155    5502 start.go:128] duration metric: took 2.377723625s to createHost
	I0818 12:37:32.059234    5502 start.go:83] releasing machines lock for "no-preload-972000", held for 2.377865417s
	W0818 12:37:32.059302    5502 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:32.075918    5502 out.go:177] * Deleting "no-preload-972000" in qemu2 ...
	W0818 12:37:32.103639    5502 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:32.103677    5502 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:32.634678    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0818 12:37:32.634707    5502 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 2.953518209s
	I0818 12:37:32.634722    5502 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0818 12:37:33.431837    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0818 12:37:33.431911    5502 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.750758958s
	I0818 12:37:33.431936    5502 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0818 12:37:34.458733    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0818 12:37:34.458754    5502 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.777776208s
	I0818 12:37:34.458764    5502 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0818 12:37:34.638806    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0818 12:37:34.638822    5502 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.957722167s
	I0818 12:37:34.638832    5502 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0818 12:37:35.079700    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0818 12:37:35.079727    5502 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.398626708s
	I0818 12:37:35.079741    5502 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0818 12:37:37.104273    5502 start.go:360] acquireMachinesLock for no-preload-972000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:37.104794    5502 start.go:364] duration metric: took 425.5µs to acquireMachinesLock for "no-preload-972000"
	I0818 12:37:37.104940    5502 start.go:93] Provisioning new machine with config: &{Name:no-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:37.105238    5502 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:37.110958    5502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:37.160912    5502 start.go:159] libmachine.API.Create for "no-preload-972000" (driver="qemu2")
	I0818 12:37:37.160957    5502 client.go:168] LocalClient.Create starting
	I0818 12:37:37.161093    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:37.161160    5502 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:37.161179    5502 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:37.161268    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:37.161314    5502 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:37.161331    5502 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:37.161848    5502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:37.321184    5502 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:37.532626    5502 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:37.532636    5502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:37.532903    5502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:37.542752    5502 main.go:141] libmachine: STDOUT: 
	I0818 12:37:37.542783    5502 main.go:141] libmachine: STDERR: 
	I0818 12:37:37.542869    5502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2 +20000M
	I0818 12:37:37.545890    5502 cache.go:157] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0818 12:37:37.545921    5502 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.864887209s
	I0818 12:37:37.545935    5502 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0818 12:37:37.545956    5502 cache.go:87] Successfully saved all images to host disk.
	I0818 12:37:37.551400    5502 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:37.551414    5502 main.go:141] libmachine: STDERR: 
	I0818 12:37:37.551427    5502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:37.551434    5502 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:37.551447    5502 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:37.551480    5502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4a:d4:4c:83:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:37.553236    5502 main.go:141] libmachine: STDOUT: 
	I0818 12:37:37.553255    5502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:37.553266    5502 client.go:171] duration metric: took 392.309792ms to LocalClient.Create
	I0818 12:37:39.555428    5502 start.go:128] duration metric: took 2.45015675s to createHost
	I0818 12:37:39.555511    5502 start.go:83] releasing machines lock for "no-preload-972000", held for 2.450723084s
	W0818 12:37:39.555920    5502 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:39.567443    5502 out.go:201] 
	W0818 12:37:39.571475    5502 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:39.571868    5502 out.go:270] * 
	W0818 12:37:39.573998    5502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:39.582411    5502 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (60.403042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.09s)
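
Note the retry flow in this run: create host, connection refused, delete profile, wait 5 seconds, recreate, refused again, while the image cache fills normally in the background. Since the refusal is host-side, restarting the socket_vmnet daemon is the more plausible fix than deleting profiles. A hedged remediation sketch; the Homebrew service name, daemon binary location, and gateway flag are assumed from the socket_vmnet documentation, not taken from this log:

    # If socket_vmnet was installed via Homebrew (service name assumed):
    sudo brew services restart socket_vmnet

    # Or run the daemon directly; the socket path matches the client path
    # in this log, while the binary path and gateway address are assumed
    # defaults.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet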

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-972000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-972000 create -f testdata/busybox.yaml: exit status 1 (29.659625ms)

** stderr ** 
	error: context "no-preload-972000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-972000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (30.076375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (29.831958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-972000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-972000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-972000 describe deploy/metrics-server -n kube-system: exit status 1 (27.885625ms)

** stderr ** 
	error: context "no-preload-972000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-972000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (29.869292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.189179042s)

-- stdout --
	* [no-preload-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-972000" primary control-plane node in "no-preload-972000" cluster
	* Restarting existing qemu2 VM for "no-preload-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:41.878458    5574 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:41.878595    5574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:41.878599    5574 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:41.878601    5574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:41.878739    5574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:41.879759    5574 out.go:352] Setting JSON to false
	I0818 12:37:41.895905    5574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4031,"bootTime":1724005830,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:41.895977    5574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:41.900539    5574 out.go:177] * [no-preload-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:41.906463    5574 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:41.906546    5574 notify.go:220] Checking for updates...
	I0818 12:37:41.913463    5574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:41.916491    5574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:41.919458    5574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:41.922438    5574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:41.925406    5574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:41.928691    5574 config.go:182] Loaded profile config "no-preload-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:41.928916    5574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:41.931408    5574 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:37:41.938421    5574 start.go:297] selected driver: qemu2
	I0818 12:37:41.938429    5574 start.go:901] validating driver "qemu2" against &{Name:no-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:41.938503    5574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:41.940563    5574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:41.940589    5574 cni.go:84] Creating CNI manager for ""
	I0818 12:37:41.940595    5574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:41.940618    5574 start.go:340] cluster config:
	{Name:no-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-972000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:41.943839    5574 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.952453    5574 out.go:177] * Starting "no-preload-972000" primary control-plane node in "no-preload-972000" cluster
	I0818 12:37:41.956470    5574 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:41.956558    5574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/no-preload-972000/config.json ...
	I0818 12:37:41.956603    5574 cache.go:107] acquiring lock: {Name:mkcaf27b6b9250fba1720aabd7d5e4375ecdab25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956631    5574 cache.go:107] acquiring lock: {Name:mk542808d05c3ef0868c3c5f9085bfeb01ea328c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956632    5574 cache.go:107] acquiring lock: {Name:mk4c48751d0bfca27c6cdc46d730ee66cc3fedca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956669    5574 cache.go:107] acquiring lock: {Name:mk01c4e00ddb6df03884ff6c5dda909f048df9d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956683    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0818 12:37:41.956693    5574 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.25µs
	I0818 12:37:41.956700    5574 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0818 12:37:41.956704    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0818 12:37:41.956708    5574 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 86.834µs
	I0818 12:37:41.956712    5574 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0818 12:37:41.956733    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0818 12:37:41.956767    5574 cache.go:107] acquiring lock: {Name:mka98384f5da9b309f3d131d0f3c72e94440a1b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956787    5574 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 182.667µs
	I0818 12:37:41.956795    5574 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0818 12:37:41.956787    5574 cache.go:107] acquiring lock: {Name:mk9ef2d4efe2899c666e3303a53ffa46536fdc51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956736    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0818 12:37:41.956738    5574 cache.go:107] acquiring lock: {Name:mk8eadaa4e5bce96a6c8848d3f84e5b03556b803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956769    5574 cache.go:107] acquiring lock: {Name:mkbc85d8f06916af46374711ac99b55f24a4f989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:41.956823    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0818 12:37:41.956827    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0818 12:37:41.956827    5574 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 80µs
	I0818 12:37:41.956832    5574 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0818 12:37:41.956831    5574 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 107.083µs
	I0818 12:37:41.956839    5574 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0818 12:37:41.956813    5574 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 140.291µs
	I0818 12:37:41.956844    5574 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0818 12:37:41.956858    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0818 12:37:41.956861    5574 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 139.083µs
	I0818 12:37:41.956867    5574 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0818 12:37:41.956920    5574 cache.go:115] /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0818 12:37:41.956924    5574 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 186.667µs
	I0818 12:37:41.956930    5574 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0818 12:37:41.956934    5574 cache.go:87] Successfully saved all images to host disk.
	I0818 12:37:41.956956    5574 start.go:360] acquireMachinesLock for no-preload-972000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:41.956990    5574 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "no-preload-972000"
	I0818 12:37:41.956999    5574 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:37:41.957003    5574 fix.go:54] fixHost starting: 
	I0818 12:37:41.957122    5574 fix.go:112] recreateIfNeeded on no-preload-972000: state=Stopped err=<nil>
	W0818 12:37:41.957130    5574 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:37:41.965468    5574 out.go:177] * Restarting existing qemu2 VM for "no-preload-972000" ...
	I0818 12:37:41.969438    5574 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:41.969472    5574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4a:d4:4c:83:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:41.971369    5574 main.go:141] libmachine: STDOUT: 
	I0818 12:37:41.971390    5574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:41.971416    5574 fix.go:56] duration metric: took 14.414167ms for fixHost
	I0818 12:37:41.971419    5574 start.go:83] releasing machines lock for "no-preload-972000", held for 14.426166ms
	W0818 12:37:41.971427    5574 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:41.971464    5574 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:41.971469    5574 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:46.973577    5574 start.go:360] acquireMachinesLock for no-preload-972000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:46.974008    5574 start.go:364] duration metric: took 353.125µs to acquireMachinesLock for "no-preload-972000"
	I0818 12:37:46.974149    5574 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:37:46.974175    5574 fix.go:54] fixHost starting: 
	I0818 12:37:46.974987    5574 fix.go:112] recreateIfNeeded on no-preload-972000: state=Stopped err=<nil>
	W0818 12:37:46.975012    5574 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:37:46.991570    5574 out.go:177] * Restarting existing qemu2 VM for "no-preload-972000" ...
	I0818 12:37:46.995435    5574 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:46.995622    5574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4a:d4:4c:83:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/no-preload-972000/disk.qcow2
	I0818 12:37:47.004977    5574 main.go:141] libmachine: STDOUT: 
	I0818 12:37:47.005047    5574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:47.005132    5574 fix.go:56] duration metric: took 30.963916ms for fixHost
	I0818 12:37:47.005147    5574 start.go:83] releasing machines lock for "no-preload-972000", held for 31.11675ms
	W0818 12:37:47.005369    5574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:47.012417    5574 out.go:201] 
	W0818 12:37:47.015517    5574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:47.015541    5574 out.go:270] * 
	* 
	W0818 12:37:47.017959    5574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:47.027411    5574 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (66.308416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
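Note: every retry above fails at the same step: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM network backend is unavailable before QEMU even boots. A minimal sketch of how the daemon could be checked on the agent, assuming the /opt/socket_vmnet install path seen in the log and the command-line interface of the lima-vm/socket_vmnet project (the gateway address below is an assumed example, not a value from this run):

	# Is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, it must be restarted as root before qemu2 starts can succeed
	# (flags assumed from the socket_vmnet project's usage):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet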

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.844600375s)

-- stdout --
	* [default-k8s-diff-port-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-494000" primary control-plane node in "default-k8s-diff-port-494000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-494000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:42.915505    5584 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:42.915646    5584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:42.915650    5584 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:42.915652    5584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:42.915774    5584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:42.916872    5584 out.go:352] Setting JSON to false
	I0818 12:37:42.932902    5584 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4032,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:42.932974    5584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:42.937475    5584 out.go:177] * [default-k8s-diff-port-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:42.945499    5584 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:42.945569    5584 notify.go:220] Checking for updates...
	I0818 12:37:42.952458    5584 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:42.955425    5584 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:42.958473    5584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:42.961441    5584 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:42.964373    5584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:42.967804    5584 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:42.967880    5584 config.go:182] Loaded profile config "no-preload-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:42.967927    5584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:42.972413    5584 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:37:42.979467    5584 start.go:297] selected driver: qemu2
	I0818 12:37:42.979476    5584 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:37:42.979489    5584 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:42.981762    5584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:37:42.984468    5584 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:37:42.987524    5584 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:42.987556    5584 cni.go:84] Creating CNI manager for ""
	I0818 12:37:42.987564    5584 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:42.987574    5584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:37:42.987605    5584 start.go:340] cluster config:
	{Name:default-k8s-diff-port-494000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:42.991254    5584 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:43.006426    5584 out.go:177] * Starting "default-k8s-diff-port-494000" primary control-plane node in "default-k8s-diff-port-494000" cluster
	I0818 12:37:43.010349    5584 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:43.010369    5584 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:37:43.010381    5584 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:43.010452    5584 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:37:43.010459    5584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:37:43.010525    5584 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/default-k8s-diff-port-494000/config.json ...
	I0818 12:37:43.010537    5584 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/default-k8s-diff-port-494000/config.json: {Name:mk6b29ea65f234fcf6718ce5b23e2442ed854764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:37:43.010779    5584 start.go:360] acquireMachinesLock for default-k8s-diff-port-494000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:43.010818    5584 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "default-k8s-diff-port-494000"
	I0818 12:37:43.010832    5584 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:43.010879    5584 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:43.014473    5584 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:43.032689    5584 start.go:159] libmachine.API.Create for "default-k8s-diff-port-494000" (driver="qemu2")
	I0818 12:37:43.032719    5584 client.go:168] LocalClient.Create starting
	I0818 12:37:43.032776    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:43.032809    5584 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:43.032817    5584 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:43.032855    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:43.032884    5584 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:43.032891    5584 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:43.033301    5584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:43.270215    5584 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:43.330764    5584 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:43.330770    5584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:43.330943    5584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:43.343868    5584 main.go:141] libmachine: STDOUT: 
	I0818 12:37:43.343887    5584 main.go:141] libmachine: STDERR: 
	I0818 12:37:43.343938    5584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2 +20000M
	I0818 12:37:43.351969    5584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:43.351985    5584 main.go:141] libmachine: STDERR: 
	I0818 12:37:43.351999    5584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:43.352003    5584 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:43.352011    5584 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:43.352045    5584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:f9:64:bd:da:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:43.353663    5584 main.go:141] libmachine: STDOUT: 
	I0818 12:37:43.353678    5584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:43.353706    5584 client.go:171] duration metric: took 320.977917ms to LocalClient.Create
	I0818 12:37:45.355844    5584 start.go:128] duration metric: took 2.344975208s to createHost
	I0818 12:37:45.355918    5584 start.go:83] releasing machines lock for "default-k8s-diff-port-494000", held for 2.345121708s
	W0818 12:37:45.355968    5584 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:45.363277    5584 out.go:177] * Deleting "default-k8s-diff-port-494000" in qemu2 ...
	W0818 12:37:45.400928    5584 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:45.400958    5584 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:50.403046    5584 start.go:360] acquireMachinesLock for default-k8s-diff-port-494000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:50.403561    5584 start.go:364] duration metric: took 434.667µs to acquireMachinesLock for "default-k8s-diff-port-494000"
	I0818 12:37:50.403697    5584 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:50.403959    5584 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:50.413669    5584 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:50.464015    5584 start.go:159] libmachine.API.Create for "default-k8s-diff-port-494000" (driver="qemu2")
	I0818 12:37:50.464068    5584 client.go:168] LocalClient.Create starting
	I0818 12:37:50.464196    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:50.464262    5584 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:50.464279    5584 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:50.464341    5584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:50.464385    5584 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:50.464400    5584 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:50.465455    5584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:50.638791    5584 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:50.667994    5584 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:50.667998    5584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:50.668198    5584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:50.677448    5584 main.go:141] libmachine: STDOUT: 
	I0818 12:37:50.677467    5584 main.go:141] libmachine: STDERR: 
	I0818 12:37:50.677532    5584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2 +20000M
	I0818 12:37:50.685551    5584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:50.685564    5584 main.go:141] libmachine: STDERR: 
	I0818 12:37:50.685575    5584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:50.685579    5584 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:50.685594    5584 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:50.685617    5584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e3:4c:4d:1f:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:50.687268    5584 main.go:141] libmachine: STDOUT: 
	I0818 12:37:50.687280    5584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:50.687293    5584 client.go:171] duration metric: took 223.221083ms to LocalClient.Create
	I0818 12:37:52.689435    5584 start.go:128] duration metric: took 2.285478916s to createHost
	I0818 12:37:52.689534    5584 start.go:83] releasing machines lock for "default-k8s-diff-port-494000", held for 2.285980333s
	W0818 12:37:52.689889    5584 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:52.698393    5584 out.go:201] 
	W0818 12:37:52.704429    5584 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:52.704457    5584 out.go:270] * 
	* 
	W0818 12:37:52.707209    5584 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:52.716310    5584 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (65.490166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)
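Note: the fresh-create path fails identically to the restart path above, which points at the shared socket_vmnet dependency rather than at any one profile. The connection step can be isolated from QEMU using socket_vmnet_client alone, which (per the socket_vmnet project's documented behavior, assumed here) connects to the socket and execs the given command with the socket passed as fd 3; a hedged one-liner using only paths from the log:

	# /usr/bin/true stands in for qemu-system-aarch64; only the socket connect is exercised.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
		&& echo "socket reachable" || echo "connection refused: daemon not listening"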

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-972000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (31.213042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
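Note: the `context "no-preload-972000" does not exist` error is a downstream symptom: because SecondStart never brought the host up, minikube never (re)wrote the profile's context into the kubeconfig. This can be confirmed directly against the kubeconfig path used by the run:

	# Lists contexts in the run's kubeconfig; the no-preload-972000 entry is expected to be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig kubectl config get-contexts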

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-972000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-972000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-972000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.975833ms)

** stderr ** 
	error: context "no-preload-972000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-972000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (29.212208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-972000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (28.714375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
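Note: the image diff above uses "-want +got" notation (go-cmp style): "-" entries are images the test expected but did not find, and there are no "+" entries because the stopped host returned an empty list. Re-running the command from the log illustrates this:

	# Against a stopped profile this returns no images; against a running one,
	# one JSON entry per cached image would be expected.
	out/minikube-darwin-arm64 -p no-preload-972000 image list --format=json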

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-972000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-972000 --alsologtostderr -v=1: exit status 83 (38.072042ms)

-- stdout --
	* The control-plane node no-preload-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-972000"

-- /stdout --
** stderr ** 
	I0818 12:37:47.292520    5606 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:47.292687    5606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:47.292690    5606 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:47.292692    5606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:47.292818    5606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:47.293054    5606 out.go:352] Setting JSON to false
	I0818 12:37:47.293065    5606 mustload.go:65] Loading cluster: no-preload-972000
	I0818 12:37:47.293250    5606 config.go:182] Loaded profile config "no-preload-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:47.297469    5606 out.go:177] * The control-plane node no-preload-972000 host is not running: state=Stopped
	I0818 12:37:47.298645    5606 out.go:177]   To start a cluster, run: "minikube start -p no-preload-972000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-972000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (28.551833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (28.724875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
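Note: the distinct exit statuses in this group are consistent with a single root cause, as the log itself shows: 80 for the failed start (GUEST_PROVISION), 83 when a subcommand such as pause finds the profile's host merely stopped, and 7 from the post-mortem status probe. That probe can be replayed verbatim:

	# Prints the host state on stdout and signals "Stopped" via exit status 7.
	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000; echo "exit=$?"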

TestStartStop/group/newest-cni/serial/FirstStart (10.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.060878458s)

-- stdout --
	* [newest-cni-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:47.607976    5623 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:47.608094    5623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:47.608097    5623 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:47.608099    5623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:47.608212    5623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:47.609333    5623 out.go:352] Setting JSON to false
	I0818 12:37:47.625543    5623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4037,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:47.625614    5623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:47.630408    5623 out.go:177] * [newest-cni-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:47.637453    5623 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:47.637508    5623 notify.go:220] Checking for updates...
	I0818 12:37:47.644413    5623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:47.647434    5623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:47.650401    5623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:47.653385    5623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:47.656404    5623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:47.659679    5623 config.go:182] Loaded profile config "default-k8s-diff-port-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:47.659737    5623 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:47.659789    5623 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:47.664366    5623 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:37:47.671451    5623 start.go:297] selected driver: qemu2
	I0818 12:37:47.671459    5623 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:37:47.671465    5623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:47.673820    5623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0818 12:37:47.673843    5623 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0818 12:37:47.681382    5623 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:37:47.684489    5623 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0818 12:37:47.684511    5623 cni.go:84] Creating CNI manager for ""
	I0818 12:37:47.684526    5623 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:47.684532    5623 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:37:47.684569    5623 start.go:340] cluster config:
	{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:47.688240    5623 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:47.695419    5623 out.go:177] * Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	I0818 12:37:47.699372    5623 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:47.699389    5623 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:37:47.699403    5623 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:47.699471    5623 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:37:47.699477    5623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:37:47.699539    5623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/newest-cni-384000/config.json ...
	I0818 12:37:47.699551    5623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/newest-cni-384000/config.json: {Name:mkfaf2ec3697905101bee36fee17807cc7bff687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:37:47.699912    5623 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:47.699948    5623 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "newest-cni-384000"
	I0818 12:37:47.699964    5623 start.go:93] Provisioning new machine with config: &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:47.699998    5623 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:47.708392    5623 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:47.726770    5623 start.go:159] libmachine.API.Create for "newest-cni-384000" (driver="qemu2")
	I0818 12:37:47.726802    5623 client.go:168] LocalClient.Create starting
	I0818 12:37:47.726869    5623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:47.726901    5623 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:47.726911    5623 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:47.726949    5623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:47.726973    5623 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:47.726980    5623 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:47.727458    5623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:47.884604    5623 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:47.999527    5623 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:47.999534    5623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:47.999897    5623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:48.009053    5623 main.go:141] libmachine: STDOUT: 
	I0818 12:37:48.009076    5623 main.go:141] libmachine: STDERR: 
	I0818 12:37:48.009130    5623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2 +20000M
	I0818 12:37:48.017011    5623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:48.017027    5623 main.go:141] libmachine: STDERR: 
	I0818 12:37:48.017047    5623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:48.017052    5623 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:48.017061    5623 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:48.017086    5623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:2c:90:76:86:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:48.018649    5623 main.go:141] libmachine: STDOUT: 
	I0818 12:37:48.018665    5623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:48.018683    5623 client.go:171] duration metric: took 291.880458ms to LocalClient.Create
	I0818 12:37:50.020831    5623 start.go:128] duration metric: took 2.320843792s to createHost
	I0818 12:37:50.020877    5623 start.go:83] releasing machines lock for "newest-cni-384000", held for 2.320948833s
	W0818 12:37:50.020940    5623 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:50.034970    5623 out.go:177] * Deleting "newest-cni-384000" in qemu2 ...
	W0818 12:37:50.063602    5623 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:50.063624    5623 start.go:729] Will try again in 5 seconds ...
	I0818 12:37:55.064854    5623 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:55.065252    5623 start.go:364] duration metric: took 266.25µs to acquireMachinesLock for "newest-cni-384000"
	I0818 12:37:55.065340    5623 start.go:93] Provisioning new machine with config: &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:55.065633    5623 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:37:55.071312    5623 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:37:55.119403    5623 start.go:159] libmachine.API.Create for "newest-cni-384000" (driver="qemu2")
	I0818 12:37:55.119465    5623 client.go:168] LocalClient.Create starting
	I0818 12:37:55.119560    5623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:37:55.119608    5623 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:55.119621    5623 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:55.119680    5623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:37:55.119722    5623 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:55.119733    5623 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:55.120243    5623 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:37:55.289271    5623 main.go:141] libmachine: Creating SSH key...
	I0818 12:37:55.581110    5623 main.go:141] libmachine: Creating Disk image...
	I0818 12:37:55.581125    5623 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:37:55.581330    5623 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:55.590741    5623 main.go:141] libmachine: STDOUT: 
	I0818 12:37:55.590773    5623 main.go:141] libmachine: STDERR: 
	I0818 12:37:55.590837    5623 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2 +20000M
	I0818 12:37:55.598932    5623 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:37:55.598951    5623 main.go:141] libmachine: STDERR: 
	I0818 12:37:55.598969    5623 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:55.598978    5623 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:37:55.598988    5623 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:55.599022    5623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f7:7d:f3:81:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:37:55.600592    5623 main.go:141] libmachine: STDOUT: 
	I0818 12:37:55.600617    5623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:55.600637    5623 client.go:171] duration metric: took 481.1725ms to LocalClient.Create
	I0818 12:37:57.602040    5623 start.go:128] duration metric: took 2.536414417s to createHost
	I0818 12:37:57.602128    5623 start.go:83] releasing machines lock for "newest-cni-384000", held for 2.53687625s
	W0818 12:37:57.602503    5623 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:57.616011    5623 out.go:201] 
	W0818 12:37:57.620170    5623 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:57.620196    5623 out.go:270] * 
	W0818 12:37:57.623015    5623 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:57.629940    5623 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (63.917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.13s)
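Every start failure in this run reduces to the same line: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its -netdev socket file descriptor and minikube gives up after one delete-and-retry cycle. A quick way to check the daemon on the agent might look like the sketch below (it assumes the Homebrew-managed socket_vmnet install these agents appear to use; the service name is an assumption):

	# is the daemon socket present, and is a socket_vmnet process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if not, restart it (root is required to own the vmnet interface)
	sudo brew services restart socket_vmnet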

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-494000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-494000 create -f testdata/busybox.yaml: exit status 1 (30.200875ms)

** stderr ** 
	error: context "default-k8s-diff-port-494000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-494000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (28.611375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (28.982ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
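The kubectl failure here is secondary: the earlier start for this profile never provisioned a VM, so no context named default-k8s-diff-port-494000 was ever written to the kubeconfig. Listing the contexts in the integration kubeconfig makes that obvious; a sketch, using the kubeconfig path this run uses:

	# show which contexts actually exist for this test run
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19423-984/kubeconfig config get-contexts
	# the deploy can only succeed once the profile's context is present
	kubectl --context default-k8s-diff-port-494000 create -f testdata/busybox.yaml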

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-494000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-494000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-494000 describe deploy/metrics-server -n kube-system: exit status 1 (27.273041ms)

** stderr ** 
	error: context "default-k8s-diff-port-494000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-494000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (28.574209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
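Note that "addons enable" itself succeeds, since it only rewrites the profile config on disk; only the follow-up kubectl describe fails, again for want of a context. On a running cluster the image override could be verified directly; a sketch, assuming the metrics-server deployment exists:

	# print the image the metrics-server deployment actually got
	kubectl --context default-k8s-diff-port-494000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4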

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.584705667s)

-- stdout --
	* [default-k8s-diff-port-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-494000" primary control-plane node in "default-k8s-diff-port-494000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-494000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-494000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:37:57.135166    5680 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:57.135306    5680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:57.135309    5680 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:57.135311    5680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:57.135454    5680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:37:57.136431    5680 out.go:352] Setting JSON to false
	I0818 12:37:57.152801    5680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4047,"bootTime":1724005830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:37:57.152869    5680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:57.157962    5680 out.go:177] * [default-k8s-diff-port-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:37:57.164970    5680 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:57.165024    5680 notify.go:220] Checking for updates...
	I0818 12:37:57.171956    5680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:37:57.175007    5680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:37:57.177928    5680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:57.181002    5680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:37:57.183931    5680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:57.187198    5680 config.go:182] Loaded profile config "default-k8s-diff-port-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:57.187467    5680 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:57.191989    5680 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:37:57.198959    5680 start.go:297] selected driver: qemu2
	I0818 12:37:57.198972    5680 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:57.199042    5680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:57.201296    5680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:37:57.201343    5680 cni.go:84] Creating CNI manager for ""
	I0818 12:37:57.201356    5680 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:57.201375    5680 start.go:340] cluster config:
	{Name:default-k8s-diff-port-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-494000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:57.204900    5680 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:57.212886    5680 out.go:177] * Starting "default-k8s-diff-port-494000" primary control-plane node in "default-k8s-diff-port-494000" cluster
	I0818 12:37:57.217056    5680 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:57.217072    5680 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:37:57.217082    5680 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:57.217140    5680 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:37:57.217146    5680 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:37:57.217212    5680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/default-k8s-diff-port-494000/config.json ...
	I0818 12:37:57.217650    5680 start.go:360] acquireMachinesLock for default-k8s-diff-port-494000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:57.602250    5680 start.go:364] duration metric: took 384.584833ms to acquireMachinesLock for "default-k8s-diff-port-494000"
	I0818 12:37:57.602449    5680 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:37:57.602491    5680 fix.go:54] fixHost starting: 
	I0818 12:37:57.603282    5680 fix.go:112] recreateIfNeeded on default-k8s-diff-port-494000: state=Stopped err=<nil>
	W0818 12:37:57.603343    5680 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:37:57.615984    5680 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-494000" ...
	I0818 12:37:57.620139    5680 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:37:57.620353    5680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e3:4c:4d:1f:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:37:57.630154    5680 main.go:141] libmachine: STDOUT: 
	I0818 12:37:57.630232    5680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:37:57.630360    5680 fix.go:56] duration metric: took 27.874458ms for fixHost
	I0818 12:37:57.630379    5680 start.go:83] releasing machines lock for "default-k8s-diff-port-494000", held for 28.097291ms
	W0818 12:37:57.630405    5680 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:37:57.630554    5680 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:37:57.630568    5680 start.go:729] Will try again in 5 seconds ...
	I0818 12:38:02.632744    5680 start.go:360] acquireMachinesLock for default-k8s-diff-port-494000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:02.633159    5680 start.go:364] duration metric: took 308.667µs to acquireMachinesLock for "default-k8s-diff-port-494000"
	I0818 12:38:02.633300    5680 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:38:02.633317    5680 fix.go:54] fixHost starting: 
	I0818 12:38:02.633995    5680 fix.go:112] recreateIfNeeded on default-k8s-diff-port-494000: state=Stopped err=<nil>
	W0818 12:38:02.634029    5680 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:38:02.643554    5680 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-494000" ...
	I0818 12:38:02.647697    5680 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:02.647886    5680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e3:4c:4d:1f:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/default-k8s-diff-port-494000/disk.qcow2
	I0818 12:38:02.657111    5680 main.go:141] libmachine: STDOUT: 
	I0818 12:38:02.657193    5680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:02.657292    5680 fix.go:56] duration metric: took 23.974083ms for fixHost
	I0818 12:38:02.657313    5680 start.go:83] releasing machines lock for "default-k8s-diff-port-494000", held for 24.130792ms
	W0818 12:38:02.657517    5680 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-494000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:02.663681    5680 out.go:201] 
	W0818 12:38:02.667734    5680 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:02.667767    5680 out.go:270] * 
	W0818 12:38:02.670518    5680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:38:02.677749    5680 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (66.187375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.65s)
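SecondStart differs from FirstStart only in reusing the existing machine ("Skipping create...Using existing machine configuration") and restarting its VM, which dies on the same socket. Once socket_vmnet is reachable again, the recovery the log itself suggests would look roughly like:

	# discard the half-provisioned machine, then retry the same start
	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-494000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-494000 --memory=2200 \
	  --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.0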

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.190585041s)

-- stdout --
	* [newest-cni-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	* Restarting existing qemu2 VM for "newest-cni-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:38:01.669175    5715 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:01.669311    5715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:01.669314    5715 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:01.669317    5715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:01.669452    5715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:01.670461    5715 out.go:352] Setting JSON to false
	I0818 12:38:01.686582    5715 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4051,"bootTime":1724005830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:38:01.686645    5715 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:38:01.690687    5715 out.go:177] * [newest-cni-384000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:38:01.697563    5715 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:38:01.697635    5715 notify.go:220] Checking for updates...
	I0818 12:38:01.704566    5715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:38:01.707552    5715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:38:01.710598    5715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:38:01.713601    5715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:38:01.716594    5715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:38:01.719899    5715 config.go:182] Loaded profile config "newest-cni-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:01.720184    5715 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:38:01.724547    5715 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:38:01.731579    5715 start.go:297] selected driver: qemu2
	I0818 12:38:01.731585    5715 start.go:901] validating driver "qemu2" against &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:01.731627    5715 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:38:01.733906    5715 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0818 12:38:01.733951    5715 cni.go:84] Creating CNI manager for ""
	I0818 12:38:01.733958    5715 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:38:01.733975    5715 start.go:340] cluster config:
	{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:01.737287    5715 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:38:01.744533    5715 out.go:177] * Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	I0818 12:38:01.747518    5715 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:38:01.747545    5715 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:38:01.747556    5715 cache.go:56] Caching tarball of preloaded images
	I0818 12:38:01.747619    5715 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:38:01.747625    5715 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:38:01.747712    5715 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/newest-cni-384000/config.json ...
	I0818 12:38:01.748130    5715 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:01.748163    5715 start.go:364] duration metric: took 22.292µs to acquireMachinesLock for "newest-cni-384000"
	I0818 12:38:01.748172    5715 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:38:01.748177    5715 fix.go:54] fixHost starting: 
	I0818 12:38:01.748302    5715 fix.go:112] recreateIfNeeded on newest-cni-384000: state=Stopped err=<nil>
	W0818 12:38:01.748312    5715 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:38:01.752603    5715 out.go:177] * Restarting existing qemu2 VM for "newest-cni-384000" ...
	I0818 12:38:01.759552    5715 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:01.759590    5715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f7:7d:f3:81:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:38:01.761615    5715 main.go:141] libmachine: STDOUT: 
	I0818 12:38:01.761644    5715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:01.761674    5715 fix.go:56] duration metric: took 13.4975ms for fixHost
	I0818 12:38:01.761677    5715 start.go:83] releasing machines lock for "newest-cni-384000", held for 13.510583ms
	W0818 12:38:01.761684    5715 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:01.761708    5715 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:01.761712    5715 start.go:729] Will try again in 5 seconds ...
	I0818 12:38:06.763880    5715 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:06.764350    5715 start.go:364] duration metric: took 359.084µs to acquireMachinesLock for "newest-cni-384000"
	I0818 12:38:06.764467    5715 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:38:06.764514    5715 fix.go:54] fixHost starting: 
	I0818 12:38:06.765298    5715 fix.go:112] recreateIfNeeded on newest-cni-384000: state=Stopped err=<nil>
	W0818 12:38:06.765328    5715 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:38:06.782021    5715 out.go:177] * Restarting existing qemu2 VM for "newest-cni-384000" ...
	I0818 12:38:06.785715    5715 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:06.785920    5715 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f7:7d:f3:81:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/newest-cni-384000/disk.qcow2
	I0818 12:38:06.795368    5715 main.go:141] libmachine: STDOUT: 
	I0818 12:38:06.795431    5715 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:06.795521    5715 fix.go:56] duration metric: took 31.0335ms for fixHost
	I0818 12:38:06.795538    5715 start.go:83] releasing machines lock for "newest-cni-384000", held for 31.165417ms
	W0818 12:38:06.795747    5715 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:06.803773    5715 out.go:201] 
	W0818 12:38:06.806798    5715 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:06.806832    5715 out.go:270] * 
	* 
	W0818 12:38:06.809322    5715 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:38:06.818728    5715 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (67.62ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
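Every start failure in this section bottoms out in the same driver error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched and each profile is left in state=Stopped. Below is a minimal Go sketch, not part of the test suite, that reproduces just the failing dial; the socket path is taken from the SocketVMnetPath field in the cluster config logged above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The same unix socket the qemu2 driver hands to socket_vmnet_client.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this agent the dial fails the same way the driver does:
			// "connection refused" means no socket_vmnet daemon is listening.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy agent the dial succeeds once the socket_vmnet daemon is up; here it fails on both the restart path (fix.go) and the create path (client.go), which is why the retry five seconds later fails identically.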

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-494000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (31.481375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-494000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-494000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-494000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.836625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-494000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-494000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (29.659875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-494000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (29.323833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-494000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-494000 --alsologtostderr -v=1: exit status 83 (40.178625ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-494000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-494000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:38:02.944783    5734 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:02.944935    5734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:02.944938    5734 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:02.944940    5734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:02.945069    5734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:02.945295    5734 out.go:352] Setting JSON to false
	I0818 12:38:02.945303    5734 mustload.go:65] Loading cluster: default-k8s-diff-port-494000
	I0818 12:38:02.945486    5734 config.go:182] Loaded profile config "default-k8s-diff-port-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:02.948725    5734 out.go:177] * The control-plane node default-k8s-diff-port-494000 host is not running: state=Stopped
	I0818 12:38:02.952565    5734 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-494000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-494000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (28.602333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (29.075875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-494000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
E0818 12:38:06.647100    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.88125075s)

                                                
                                                
-- stdout --
	* [embed-certs-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-470000" primary control-plane node in "embed-certs-470000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-470000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:38:03.376717    5758 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:03.376852    5758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:03.376855    5758 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:03.376857    5758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:03.377005    5758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:03.378081    5758 out.go:352] Setting JSON to false
	I0818 12:38:03.394280    5758 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4053,"bootTime":1724005830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:38:03.394356    5758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:38:03.399441    5758 out.go:177] * [embed-certs-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:38:03.406668    5758 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:38:03.406738    5758 notify.go:220] Checking for updates...
	I0818 12:38:03.414522    5758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:38:03.418614    5758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:38:03.421543    5758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:38:03.424588    5758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:38:03.427588    5758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:38:03.430959    5758 config.go:182] Loaded profile config "multinode-571000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:03.431037    5758 config.go:182] Loaded profile config "newest-cni-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:03.431099    5758 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:38:03.435571    5758 out.go:177] * Using the qemu2 driver based on user configuration
	I0818 12:38:03.444560    5758 start.go:297] selected driver: qemu2
	I0818 12:38:03.444567    5758 start.go:901] validating driver "qemu2" against <nil>
	I0818 12:38:03.444575    5758 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:38:03.446884    5758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:38:03.449546    5758 out.go:177] * Automatically selected the socket_vmnet network
	I0818 12:38:03.452649    5758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:38:03.452681    5758 cni.go:84] Creating CNI manager for ""
	I0818 12:38:03.452689    5758 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:38:03.452694    5758 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:38:03.452719    5758 start.go:340] cluster config:
	{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:03.456465    5758 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:38:03.463557    5758 out.go:177] * Starting "embed-certs-470000" primary control-plane node in "embed-certs-470000" cluster
	I0818 12:38:03.467507    5758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:38:03.467525    5758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:38:03.467536    5758 cache.go:56] Caching tarball of preloaded images
	I0818 12:38:03.467601    5758 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:38:03.467607    5758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:38:03.467667    5758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/embed-certs-470000/config.json ...
	I0818 12:38:03.467679    5758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/embed-certs-470000/config.json: {Name:mk611ee462b195b6162c117052f3aa1e09bed317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:38:03.468048    5758 start.go:360] acquireMachinesLock for embed-certs-470000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:03.468084    5758 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "embed-certs-470000"
	I0818 12:38:03.468097    5758 start.go:93] Provisioning new machine with config: &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:38:03.468131    5758 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:38:03.475595    5758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:38:03.494269    5758 start.go:159] libmachine.API.Create for "embed-certs-470000" (driver="qemu2")
	I0818 12:38:03.494294    5758 client.go:168] LocalClient.Create starting
	I0818 12:38:03.494364    5758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:38:03.494396    5758 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:03.494406    5758 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:03.494441    5758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:38:03.494465    5758 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:03.494478    5758 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:03.494844    5758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:38:03.648372    5758 main.go:141] libmachine: Creating SSH key...
	I0818 12:38:03.748338    5758 main.go:141] libmachine: Creating Disk image...
	I0818 12:38:03.748343    5758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:38:03.748507    5758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:03.758057    5758 main.go:141] libmachine: STDOUT: 
	I0818 12:38:03.758077    5758 main.go:141] libmachine: STDERR: 
	I0818 12:38:03.758129    5758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2 +20000M
	I0818 12:38:03.766331    5758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:38:03.766350    5758 main.go:141] libmachine: STDERR: 
	I0818 12:38:03.766363    5758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:03.766366    5758 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:38:03.766382    5758 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:03.766415    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:99:da:f7:01:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:03.768070    5758 main.go:141] libmachine: STDOUT: 
	I0818 12:38:03.768092    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:03.768113    5758 client.go:171] duration metric: took 273.818917ms to LocalClient.Create
	I0818 12:38:05.770260    5758 start.go:128] duration metric: took 2.302139792s to createHost
	I0818 12:38:05.770333    5758 start.go:83] releasing machines lock for "embed-certs-470000", held for 2.302271708s
	W0818 12:38:05.770384    5758 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:05.777679    5758 out.go:177] * Deleting "embed-certs-470000" in qemu2 ...
	W0818 12:38:05.814455    5758 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:05.814490    5758 start.go:729] Will try again in 5 seconds ...
	I0818 12:38:10.816770    5758 start.go:360] acquireMachinesLock for embed-certs-470000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:10.817263    5758 start.go:364] duration metric: took 351.459µs to acquireMachinesLock for "embed-certs-470000"
	I0818 12:38:10.817434    5758 start.go:93] Provisioning new machine with config: &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:38:10.817686    5758 start.go:125] createHost starting for "" (driver="qemu2")
	I0818 12:38:10.823374    5758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 12:38:10.876631    5758 start.go:159] libmachine.API.Create for "embed-certs-470000" (driver="qemu2")
	I0818 12:38:10.876699    5758 client.go:168] LocalClient.Create starting
	I0818 12:38:10.876871    5758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/ca.pem
	I0818 12:38:10.876949    5758 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:10.876964    5758 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:10.877034    5758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-984/.minikube/certs/cert.pem
	I0818 12:38:10.877082    5758 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:10.877096    5758 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:10.877610    5758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-984/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0818 12:38:11.039226    5758 main.go:141] libmachine: Creating SSH key...
	I0818 12:38:11.166904    5758 main.go:141] libmachine: Creating Disk image...
	I0818 12:38:11.166910    5758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0818 12:38:11.167086    5758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:11.176478    5758 main.go:141] libmachine: STDOUT: 
	I0818 12:38:11.176498    5758 main.go:141] libmachine: STDERR: 
	I0818 12:38:11.176550    5758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2 +20000M
	I0818 12:38:11.184441    5758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0818 12:38:11.184457    5758 main.go:141] libmachine: STDERR: 
	I0818 12:38:11.184466    5758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:11.184470    5758 main.go:141] libmachine: Starting QEMU VM...
	I0818 12:38:11.184485    5758 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:11.184509    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:23:e4:de:bc:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:11.186151    5758 main.go:141] libmachine: STDOUT: 
	I0818 12:38:11.186166    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:11.186177    5758 client.go:171] duration metric: took 309.461333ms to LocalClient.Create
	I0818 12:38:13.188325    5758 start.go:128] duration metric: took 2.370645375s to createHost
	I0818 12:38:13.188387    5758 start.go:83] releasing machines lock for "embed-certs-470000", held for 2.371131333s
	W0818 12:38:13.188728    5758 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:13.202494    5758 out.go:201] 
	W0818 12:38:13.206629    5758 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:13.206655    5758 out.go:270] * 
	* 
	W0818 12:38:13.209092    5758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:38:13.217521    5758 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (70.9145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-384000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (29.112292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
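The "v1.31.0 images missing (-want +got)" block above is a go-cmp diff: every expected image carries a "-" prefix because "minikube image list" returned nothing for a host that never booted. Below is a minimal sketch of how that diff shape is produced, assuming the github.com/google/go-cmp module; the want list is abbreviated for illustration.

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Abbreviated expected image list from the failure above.
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		// `image list` reported no images: the VM was never started.
		got := []string{}

		// cmp.Diff prints entries only in `want` with "-" and entries only
		// in `got` with "+", yielding the "(-want +got)" layout above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}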

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1: exit status 83 (41.633125ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-384000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:38:07.001424    5775 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:07.001560    5775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:07.001563    5775 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:07.001565    5775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:07.001684    5775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:07.001897    5775 out.go:352] Setting JSON to false
	I0818 12:38:07.001905    5775 mustload.go:65] Loading cluster: newest-cni-384000
	I0818 12:38:07.002106    5775 config.go:182] Loaded profile config "newest-cni-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:07.006805    5775 out.go:177] * The control-plane node newest-cni-384000 host is not running: state=Stopped
	I0818 12:38:07.010743    5775 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-384000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (29.224125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (29.23075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
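This failure is self-describing: "pause" exits with status 83 because the profile's host is in state=Stopped, and the fix is printed inline in the output above. A minimal recovery sketch in shell, lifted directly from that hint (it assumes the VM can actually boot, which the socket_vmnet errors elsewhere in this report suggest it cannot on this host):

    # Start the stopped host first, then retry the pause that failed.
    out/minikube-darwin-arm64 start -p newest-cni-384000
    out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1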

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-470000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-470000 create -f testdata/busybox.yaml: exit status 1 (29.280125ms)

** stderr ** 
	error: context "embed-certs-470000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-470000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (30.837209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (30.329791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-470000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system: exit status 1 (26.828542ms)

** stderr ** 
	error: context "embed-certs-470000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-470000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (30.260667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.182544167s)

-- stdout --
	* [embed-certs-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-470000" primary control-plane node in "embed-certs-470000" cluster
	* Restarting existing qemu2 VM for "embed-certs-470000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-470000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0818 12:38:17.464193    5833 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:17.464316    5833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:17.464319    5833 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:17.464322    5833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:17.464460    5833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:17.465425    5833 out.go:352] Setting JSON to false
	I0818 12:38:17.481664    5833 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4067,"bootTime":1724005830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 12:38:17.481730    5833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:38:17.486123    5833 out.go:177] * [embed-certs-470000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 12:38:17.492053    5833 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:38:17.492103    5833 notify.go:220] Checking for updates...
	I0818 12:38:17.500097    5833 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 12:38:17.503059    5833 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 12:38:17.506094    5833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:38:17.509112    5833 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 12:38:17.512042    5833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:38:17.515409    5833 config.go:182] Loaded profile config "embed-certs-470000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:17.515675    5833 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:38:17.520057    5833 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 12:38:17.527073    5833 start.go:297] selected driver: qemu2
	I0818 12:38:17.527082    5833 start.go:901] validating driver "qemu2" against &{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:17.527163    5833 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:38:17.529575    5833 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:38:17.529626    5833 cni.go:84] Creating CNI manager for ""
	I0818 12:38:17.529634    5833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:38:17.529671    5833 start.go:340] cluster config:
	{Name:embed-certs-470000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-470000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:17.533254    5833 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:38:17.540985    5833 out.go:177] * Starting "embed-certs-470000" primary control-plane node in "embed-certs-470000" cluster
	I0818 12:38:17.545062    5833 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:38:17.545076    5833 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 12:38:17.545084    5833 cache.go:56] Caching tarball of preloaded images
	I0818 12:38:17.545140    5833 preload.go:172] Found /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 12:38:17.545145    5833 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:38:17.545202    5833 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/embed-certs-470000/config.json ...
	I0818 12:38:17.545650    5833 start.go:360] acquireMachinesLock for embed-certs-470000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:17.545680    5833 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "embed-certs-470000"
	I0818 12:38:17.545690    5833 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:38:17.545695    5833 fix.go:54] fixHost starting: 
	I0818 12:38:17.545816    5833 fix.go:112] recreateIfNeeded on embed-certs-470000: state=Stopped err=<nil>
	W0818 12:38:17.545824    5833 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:38:17.554041    5833 out.go:177] * Restarting existing qemu2 VM for "embed-certs-470000" ...
	I0818 12:38:17.558112    5833 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:17.558159    5833 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:23:e4:de:bc:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:17.560296    5833 main.go:141] libmachine: STDOUT: 
	I0818 12:38:17.560317    5833 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:17.560347    5833 fix.go:56] duration metric: took 14.65275ms for fixHost
	I0818 12:38:17.560352    5833 start.go:83] releasing machines lock for "embed-certs-470000", held for 14.667ms
	W0818 12:38:17.560359    5833 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:17.560396    5833 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:17.560401    5833 start.go:729] Will try again in 5 seconds ...
	I0818 12:38:22.562496    5833 start.go:360] acquireMachinesLock for embed-certs-470000: {Name:mk4f73a65d48458ae67402e0fb4f68d6d5e62d65 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:22.562996    5833 start.go:364] duration metric: took 391.333µs to acquireMachinesLock for "embed-certs-470000"
	I0818 12:38:22.563187    5833 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:38:22.563204    5833 fix.go:54] fixHost starting: 
	I0818 12:38:22.563969    5833 fix.go:112] recreateIfNeeded on embed-certs-470000: state=Stopped err=<nil>
	W0818 12:38:22.563993    5833 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:38:22.568647    5833 out.go:177] * Restarting existing qemu2 VM for "embed-certs-470000" ...
	I0818 12:38:22.576315    5833 qemu.go:418] Using hvf for hardware acceleration
	I0818 12:38:22.576588    5833 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:23:e4:de:bc:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-984/.minikube/machines/embed-certs-470000/disk.qcow2
	I0818 12:38:22.585679    5833 main.go:141] libmachine: STDOUT: 
	I0818 12:38:22.585748    5833 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0818 12:38:22.585811    5833 fix.go:56] duration metric: took 22.605458ms for fixHost
	I0818 12:38:22.585828    5833 start.go:83] releasing machines lock for "embed-certs-470000", held for 22.755042ms
	W0818 12:38:22.585975    5833 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-470000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0818 12:38:22.592279    5833 out.go:201] 
	W0818 12:38:22.595429    5833 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0818 12:38:22.595547    5833 out.go:270] * 
	* 
	W0818 12:38:22.598241    5833 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:38:22.605388    5833 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-470000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (68.186542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
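The repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused' above is the actual root cause of this start failure: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client needs the socket_vmnet daemon listening on /var/run/socket_vmnet before the VM's NIC can be attached. A minimal diagnostic sketch in shell, assuming a standard socket_vmnet install per minikube's qemu driver docs (service names and paths may differ on a given host):

    # Does the unix socket exist, and is a daemon loaded to serve it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i vmnet
    # If the daemon is down, restart it (for a Homebrew install, something
    # like "sudo brew services restart socket_vmnet"), then re-run
    # "minikube start -p embed-certs-470000".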

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-470000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (32.889792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-470000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.396333ms)

** stderr ** 
	error: context "embed-certs-470000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-470000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (29.964083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-470000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (29.164125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
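Every expected image lands on the "-want" side of the diff because "image list" against a profile whose VM never started returns an empty list, not because the images were removed. A hedged spot-check for when a profile is actually running; the jq filter assumes the JSON output is an array of objects with a repoTags field, which may differ across minikube versions:

    # Print the tags the VM's container runtime actually holds.
    out/minikube-darwin-arm64 -p embed-certs-470000 image list --format=json \
      | jq -r '.[].repoTags[]'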

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1: exit status 83 (40.294209ms)

-- stdout --
	* The control-plane node embed-certs-470000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-470000"

-- /stdout --
** stderr ** 
	I0818 12:38:22.875724    5852 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:22.875897    5852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:22.875903    5852 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:22.875906    5852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:22.876041    5852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 12:38:22.876256    5852 out.go:352] Setting JSON to false
	I0818 12:38:22.876264    5852 mustload.go:65] Loading cluster: embed-certs-470000
	I0818 12:38:22.876441    5852 config.go:182] Loaded profile config "embed-certs-470000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:22.879531    5852 out.go:177] * The control-plane node embed-certs-470000 host is not running: state=Stopped
	I0818 12:38:22.883384    5852 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-470000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-470000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (29.899834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (30.361833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-470000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

Test pass (155/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 15.15
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.1
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 198.77
29 TestAddons/serial/Volcano 38.22
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 14.66
34 TestAddons/parallel/Ingress 20
35 TestAddons/parallel/InspektorGadget 10.24
36 TestAddons/parallel/MetricsServer 5.26
39 TestAddons/parallel/CSI 45.64
40 TestAddons/parallel/Headlamp 16.6
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 41.12
43 TestAddons/parallel/NvidiaDevicePlugin 5.15
44 TestAddons/parallel/Yakd 10.24
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.83
56 TestErrorSpam/setup 34.12
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 64.3
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 76.92
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.3
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
73 TestFunctional/serial/CacheCmd/cache/add_local 1.15
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.71
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 33.28
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.66
84 TestFunctional/serial/LogsFileCmd 0.64
85 TestFunctional/serial/InvalidService 3.91
87 TestFunctional/parallel/ConfigCmd 0.24
88 TestFunctional/parallel/DashboardCmd 6.66
89 TestFunctional/parallel/DryRun 0.22
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 23.51
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
111 TestFunctional/parallel/License 0.37
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.15
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.99
119 TestFunctional/parallel/ImageCommands/Setup 1.75
120 TestFunctional/parallel/DockerEnv/bash 0.31
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.24
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.59
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.12
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.39
152 TestFunctional/parallel/MountCmd/specific-port 1.2
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 183.03
161 TestMultiControlPlane/serial/DeployApp 4.69
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 54.74
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.18
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.11
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 3.27
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
208 TestMainNoArgs 0.03
255 TestStoppedBinaryUpgrade/Setup 1.12
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
272 TestNoKubernetes/serial/ProfileList 31.29
273 TestNoKubernetes/serial/Stop 3.51
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
290 TestStartStop/group/old-k8s-version/serial/Stop 2.08
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
301 TestStartStop/group/no-preload/serial/Stop 1.87
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.98
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.75
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/embed-certs/serial/Stop 3.8
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-039000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-039000: exit status 85 (95.416333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-039000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |          |
	|         | -p download-only-039000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:37:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:37:16.360729    1461 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:37:16.360890    1461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:16.360893    1461 out.go:358] Setting ErrFile to fd 2...
	I0818 11:37:16.360895    1461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:16.361013    1461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	W0818 11:37:16.361109    1461 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-984/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-984/.minikube/config/config.json: no such file or directory
	I0818 11:37:16.362456    1461 out.go:352] Setting JSON to true
	I0818 11:37:16.381073    1461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":406,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:37:16.381134    1461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:37:16.386593    1461 out.go:97] [download-only-039000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 11:37:16.386701    1461 notify.go:220] Checking for updates...
	W0818 11:37:16.386729    1461 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 11:37:16.390648    1461 out.go:169] MINIKUBE_LOCATION=19423
	I0818 11:37:16.392358    1461 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:37:16.395732    1461 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:37:16.398738    1461 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:37:16.400287    1461 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	W0818 11:37:16.407580    1461 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 11:37:16.407777    1461 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:37:16.410699    1461 out.go:97] Using the qemu2 driver based on user configuration
	I0818 11:37:16.410716    1461 start.go:297] selected driver: qemu2
	I0818 11:37:16.410729    1461 start.go:901] validating driver "qemu2" against <nil>
	I0818 11:37:16.410806    1461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:37:16.414572    1461 out.go:169] Automatically selected the socket_vmnet network
	I0818 11:37:16.420499    1461 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0818 11:37:16.420709    1461 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 11:37:16.420797    1461 cni.go:84] Creating CNI manager for ""
	I0818 11:37:16.420815    1461 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 11:37:16.420871    1461 start.go:340] cluster config:
	{Name:download-only-039000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-039000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:37:16.426831    1461 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:16.430542    1461 out.go:97] Downloading VM boot image ...
	I0818 11:37:16.430569    1461 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0818 11:37:26.571543    1461 out.go:97] Starting "download-only-039000" primary control-plane node in "download-only-039000" cluster
	I0818 11:37:26.571566    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:26.633804    1461 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:26.633830    1461 cache.go:56] Caching tarball of preloaded images
	I0818 11:37:26.634014    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:26.638391    1461 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 11:37:26.638398    1461 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:26.727060    1461 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:32.085530    1461 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:32.085694    1461 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:32.780380    1461 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 11:37:32.780563    1461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-039000/config.json ...
	I0818 11:37:32.780579    1461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-039000/config.json: {Name:mk9442a1cb9f1b069c8e1d28f86c1f8bb56f7572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:37:32.780823    1461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:32.781073    1461 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0818 11:37:33.348962    1461 out.go:193] 
	W0818 11:37:33.355999    1461 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0 0x1069839c0] Decompressors:map[bz2:0x14000590d70 gz:0x14000590d78 tar:0x14000590cf0 tar.bz2:0x14000590d00 tar.gz:0x14000590d40 tar.xz:0x14000590d50 tar.zst:0x14000590d60 tbz2:0x14000590d00 tgz:0x14000590d40 txz:0x14000590d50 tzst:0x14000590d60 xz:0x14000590d80 zip:0x14000590dc0 zst:0x14000590d88] Getters:map[file:0x14000201d40 http:0x140001221e0 https:0x14000122230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0818 11:37:33.356029    1461 out_reason.go:110] 
	W0818 11:37:33.363896    1461 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 11:37:33.366864    1461 out.go:193] 
	
	
	* The control-plane node download-only-039000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-039000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
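The "bad response code: 404" captured in the log above is the root cause of the TestDownloadOnly/v1.20.0 failures listed at the top of this report: dl.k8s.io never published darwin/arm64 kubectl binaries as far back as v1.20.0, so the getter's checksum fetch fails before the binary download even starts. A quick manual check in shell (assumes network access; v1.23.0 is only an example of a release that does ship darwin/arm64 binaries):

    # Expect a 404 for v1.20.0 and a 2xx/3xx response for a newer release.
    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
    curl -sI https://dl.k8s.io/release/v1.23.0/bin/darwin/arm64/kubectl.sha256 | head -n 1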

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-039000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (15.15s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-587000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-587000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (15.152745292s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (15.15s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-587000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-587000: exit status 85 (96.995792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-039000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-039000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| delete  | -p download-only-039000        | download-only-039000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| start   | -o=json --download-only        | download-only-587000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-587000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:37:33
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:37:33.775061    1485 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:37:33.775176    1485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:33.775180    1485 out.go:358] Setting ErrFile to fd 2...
	I0818 11:37:33.775182    1485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:33.775294    1485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 11:37:33.776320    1485 out.go:352] Setting JSON to true
	I0818 11:37:33.794282    1485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":423,"bootTime":1724005830,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:37:33.794341    1485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:37:33.798862    1485 out.go:97] [download-only-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 11:37:33.798941    1485 notify.go:220] Checking for updates...
	I0818 11:37:33.802900    1485 out.go:169] MINIKUBE_LOCATION=19423
	I0818 11:37:33.805861    1485 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:37:33.809877    1485 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:37:33.812879    1485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:37:33.815864    1485 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	W0818 11:37:33.821856    1485 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 11:37:33.822002    1485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:37:33.823383    1485 out.go:97] Using the qemu2 driver based on user configuration
	I0818 11:37:33.823390    1485 start.go:297] selected driver: qemu2
	I0818 11:37:33.823393    1485 start.go:901] validating driver "qemu2" against <nil>
	I0818 11:37:33.823435    1485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:37:33.826804    1485 out.go:169] Automatically selected the socket_vmnet network
	I0818 11:37:33.832111    1485 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0818 11:37:33.832203    1485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 11:37:33.832241    1485 cni.go:84] Creating CNI manager for ""
	I0818 11:37:33.832249    1485 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 11:37:33.832256    1485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 11:37:33.832304    1485 start.go:340] cluster config:
	{Name:download-only-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:37:33.835744    1485 iso.go:125] acquiring lock: {Name:mk9ea6eb0bd466d6d0c8de2848a963ed28ee9cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:33.838899    1485 out.go:97] Starting "download-only-587000" primary control-plane node in "download-only-587000" cluster
	I0818 11:37:33.838906    1485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:37:33.897949    1485 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:33.897961    1485 cache.go:56] Caching tarball of preloaded images
	I0818 11:37:33.898132    1485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:37:33.902889    1485 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0818 11:37:33.902896    1485 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:33.995890    1485 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0818 11:37:44.124149    1485 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:44.124313    1485 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-984/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0818 11:37:44.646600    1485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 11:37:44.646803    1485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-587000/config.json ...
	I0818 11:37:44.646819    1485 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/download-only-587000/config.json: {Name:mkf79ac5be87ea4769534c6e8b0842286877715a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:37:44.647059    1485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:37:44.647207    1485 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-984/.minikube/cache/darwin/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-587000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-587000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.10s)
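
For reference, the downloads logged above carry go-getter-style checksum hints in the URL query ("?checksum=md5:<hex>" for the preload tarball, "?checksum=file:<url>" for kubectl); the "invalid checksum: Error downloading checksum file: bad response code: 404" failure near the top of this report is what that mechanism surfaces when the referenced checksum file cannot be fetched. Below is a minimal stdlib Go sketch of the verify-while-downloading step; the URL, destination path, and helper name are illustrative, not minikube's actual downloader:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 fetches src into dst and fails if the payload's MD5
	// does not match wantHex, the value a ?checksum=md5:<hex> query carries.
	func downloadWithMD5(src, dst, wantHex string) error {
		resp, err := http.Get(src)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// Hash the stream while writing it to disk, then compare digests.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("invalid checksum: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://example.com/preloaded-images.tar.lz4", // illustrative URL
			"/tmp/preloaded-images.tar.lz4",
			"90c22abece392b762c0b4e45be981bb4", // digest taken from the log line above
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}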

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-587000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.44s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-946000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-946000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-946000
--- PASS: TestBinaryMirror (0.44s)
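
TestBinaryMirror points "minikube start --binary-mirror http://127.0.0.1:49311" at a short-lived local HTTP server that stands in for dl.k8s.io. Such a mirror is essentially a static file server; this hedged Go sketch serves a local directory of kubectl/kubelet/kubeadm binaries (the directory path is illustrative, and this is not the server the test itself spins up):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-downloaded Kubernetes binaries laid out under this
		// directory so --binary-mirror can fetch them instead of dl.k8s.io.
		mirror := http.FileServer(http.Dir("/tmp/k8s-binaries"))
		log.Println("binary mirror listening on http://127.0.0.1:49311")
		log.Fatal(http.ListenAndServe("127.0.0.1:49311", mirror))
	}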

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-711000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-711000: exit status 85 (57.010041ms)

-- stdout --
	* Profile "addons-711000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-711000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-711000: exit status 85 (60.888792ms)

-- stdout --
	* Profile "addons-711000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-711000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (198.77s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-711000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-711000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m18.765641667s)
--- PASS: TestAddons/Setup (198.77s)

TestAddons/serial/Volcano (38.22s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.174208ms
addons_test.go:905: volcano-admission stabilized in 7.215541ms
addons_test.go:897: volcano-scheduler stabilized in 7.295666ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-mlnks" [0573c5e2-83b6-4e02-8486-42b683bfd368] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006974417s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-q8566" [ad375080-947d-4e0b-9d02-21211579806c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005658833s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-tnzx7" [7678c85c-bb7e-4901-acb3-3b78bcb1caf1] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004820833s
addons_test.go:932: (dbg) Run:  kubectl --context addons-711000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-711000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-711000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b22fcfc5-bf28-4718-85cf-f2ddea5badfd] Pending
helpers_test.go:344: "test-job-nginx-0" [b22fcfc5-bf28-4718-85cf-f2ddea5badfd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b22fcfc5-bf28-4718-85cf-f2ddea5badfd] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005404791s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable volcano --alsologtostderr -v=1: (9.976644667s)
--- PASS: TestAddons/serial/Volcano (38.22s)
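
The "waiting 6m0s for pods matching ..." lines above come from a polling helper in helpers_test.go. A rough client-go equivalent is sketched below; it assumes k8s.io/client-go is on the module path and uses an illustrative kubeconfig path, and it is not the suite's actual helper:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods polls until at least one pod matching selector in ns
	// reports phase Running, or the timeout elapses.
	func waitForRunningPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForRunningPods(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}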

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-711000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-711000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Registry (14.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.234625ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-m44xk" [a5e6bac0-2528-43b4-8e67-28087d40a06b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009182833s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-47vtl" [78204ffb-de46-4a83-8cac-bd4fe41efb09] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0062375s
addons_test.go:342: (dbg) Run:  kubectl --context addons-711000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-711000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-711000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.317789542s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.66s)

TestAddons/parallel/Ingress (20s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-711000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-711000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-711000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [72f4db22-2cb6-4f17-b347-5ded063882dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [72f4db22-2cb6-4f17-b347-5ded063882dd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010047625s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-711000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable ingress-dns --alsologtostderr -v=1: (1.172215292s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable ingress --alsologtostderr -v=1: (7.2017895s)
--- PASS: TestAddons/parallel/Ingress (20.00s)

TestAddons/parallel/InspektorGadget (10.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n7wcp" [d998fe92-ff55-4188-82d4-d320bceaa152] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005804875s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-711000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-711000: (5.232745292s)
--- PASS: TestAddons/parallel/InspektorGadget (10.24s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.562ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-x66ct" [28ad6fff-40af-4d2b-ac74-5d6cd697e1bb] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004595417s
addons_test.go:417: (dbg) Run:  kubectl --context addons-711000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (45.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 60.719666ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-711000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-711000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dd1be238-58da-4bb0-9b61-ef2e8d48b3c3] Pending
helpers_test.go:344: "task-pv-pod" [dd1be238-58da-4bb0-9b61-ef2e8d48b3c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dd1be238-58da-4bb0-9b61-ef2e8d48b3c3] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005122792s
addons_test.go:590: (dbg) Run:  kubectl --context addons-711000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-711000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-711000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-711000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-711000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-711000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/08/18 11:42:19 [DEBUG] GET http://192.168.105.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-711000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [895d8ab0-ded9-4d23-9435-9143d4b2abd2] Pending
helpers_test.go:344: "task-pv-pod-restore" [895d8ab0-ded9-4d23-9435-9143d4b2abd2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [895d8ab0-ded9-4d23-9435-9143d4b2abd2] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009484417s
addons_test.go:632: (dbg) Run:  kubectl --context addons-711000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-711000 delete pod task-pv-pod-restore: (1.292814875s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-711000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-711000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.081228416s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.64s)
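
The long run of helpers_test.go:394 lines above is a poll on the PVC's ".status.phase" until kubectl reports Bound. A standalone sketch of that loop, shelling out to kubectl the same way (the context, PVC name, and timeout match the log, but the helper itself is hypothetical, not the suite's code):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForPVCBound re-runs `kubectl get pvc ... -o jsonpath={.status.phase}`
	// until the phase reads Bound or the deadline passes.
	func waitForPVCBound(kubectlContext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectlContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && bytes.Equal(bytes.TrimSpace(out), []byte("Bound")) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-711000", "hpvc-restore", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}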

TestAddons/parallel/Headlamp (16.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-711000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-dtx7h" [87f2e595-4afc-4167-919f-7f866491fe52] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-dtx7h" [87f2e595-4afc-4167-919f-7f866491fe52] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0101825s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable headlamp --alsologtostderr -v=1: (5.265884292s)
--- PASS: TestAddons/parallel/Headlamp (16.60s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-zhf2d" [194e8281-4689-4264-89cc-005b61e57b8c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005934583s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-711000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (41.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-711000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-711000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8f08f59e-5418-46b0-93f3-ab6a635b5ba9] Pending
helpers_test.go:344: "test-local-path" [8f08f59e-5418-46b0-93f3-ab6a635b5ba9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8f08f59e-5418-46b0-93f3-ab6a635b5ba9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8f08f59e-5418-46b0-93f3-ab6a635b5ba9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.01113225s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-711000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 ssh "cat /opt/local-path-provisioner/pvc-7a9ebccd-6675-4a5a-b8f8-2cbdbb0709ee_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-711000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-711000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.583361333s)
--- PASS: TestAddons/parallel/LocalPath (41.12s)

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dwfqq" [4d33a829-0e56-4530-bf05-c65af78bc1be] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003146041s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-711000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vrmx7" [76143d4c-d9fa-4623-9c34-a997dc84caa9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00411425s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-711000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-711000 addons disable yakd --alsologtostderr -v=1: (5.237857833s)
--- PASS: TestAddons/parallel/Yakd (10.24s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-711000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-711000: (12.206429291s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-711000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-711000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-711000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.83s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.83s)

TestErrorSpam/setup (34.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-103000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-103000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 --driver=qemu2 : (34.118182542s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.12s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop: (12.20723675s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop: (26.059400583s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-103000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-103000 stop: (26.029162375s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19423-984/.minikube/files/etc/test/nested/copy/1459/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0818 11:46:08.677832    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:08.686057    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:08.699421    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:08.722753    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:08.766157    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:08.849591    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:09.013030    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:09.336562    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:09.979988    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:11.263295    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:13.826182    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:18.948685    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:46:29.190869    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-685000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m16.916839167s)
--- PASS: TestFunctional/serial/StartWithProxy (76.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --alsologtostderr -v=8
E0818 11:46:49.671707    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-685000 --alsologtostderr -v=8: (37.29464325s)
functional_test.go:663: soft start took 37.295161542s for "functional-685000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.30s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-685000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-685000 cache add registry.k8s.io/pause:3.1: (1.085840458s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local394274738/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache add minikube-local-cache-test:functional-685000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache delete minikube-local-cache-test:functional-685000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-685000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)
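
Note: a minimal sketch of the local-cache flow above; ./some-build-context is a placeholder, since the test's generated build context is not shown in the log:

	# Build a throwaway image on the host, add it to minikube's cache, then clean up.
	docker build -t minikube-local-cache-test:functional-685000 ./some-build-context
	out/minikube-darwin-arm64 -p functional-685000 cache add minikube-local-cache-test:functional-685000
	out/minikube-darwin-arm64 -p functional-685000 cache delete minikube-local-cache-test:functional-685000
	docker rmi minikube-local-cache-test:functional-685000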

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.390667ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)
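
Note: the reload check above reduces to delete, verify, restore; the same sequence by hand:

	# Remove the image inside the node, confirm it is gone, then restore it from the host-side cache.
	out/minikube-darwin-arm64 -p functional-685000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "gone, as expected"
	out/minikube-darwin-arm64 -p functional-685000 cache reload
	out/minikube-darwin-arm64 -p functional-685000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed again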

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 kubectl -- --context functional-685000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.71s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-685000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-685000 get pods: (1.006260375s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (33.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0818 11:47:30.634217    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-685000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.279187209s)
functional_test.go:761: restart took 33.279288583s for "functional-685000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.28s)
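
Note: --extra-config forwards component flags through kubeadm. A hedged way to confirm the plugin reached the running apiserver (the component=kube-apiserver label is assumed from kubeadm's static-pod convention):

	out/minikube-darwin-arm64 start -p functional-685000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-685000 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins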

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-685000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd880005274/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (3.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-685000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-685000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-685000: exit status 115 (143.709292ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31597 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-685000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.91s)
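
Note: testdata/invalidsvc.yaml is not reproduced in this log; a stand-in that should trigger the same SVC_UNREACHABLE path is any NodePort Service whose selector matches no pods (for example selector app: no-such-pod, port 80, saved as invalid-svc.yaml):

	kubectl --context functional-685000 apply -f invalid-svc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-685000; echo "exit: $?"   # expect 115
	kubectl --context functional-685000 delete -f invalid-svc.yaml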

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 config get cpus: exit status 14 (36.257375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 config get cpus: exit status 14 (31.3695ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
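
Note: the two exit-status-14 results above are the expected "key not found" path, not failures. The full round trip by hand:

	out/minikube-darwin-arm64 -p functional-685000 config get cpus; echo "exit: $?"   # unset key => 14
	out/minikube-darwin-arm64 -p functional-685000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-685000 config get cpus                    # prints 2, exit 0
	out/minikube-darwin-arm64 -p functional-685000 config unset cpus
	out/minikube-darwin-arm64 -p functional-685000 config get cpus; echo "exit: $?"   # back to 14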

TestFunctional/parallel/DashboardCmd (6.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-685000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-685000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2177: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.66s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-685000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.98275ms)

-- stdout --
	* [functional-685000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0818 11:48:50.543593    2153 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:48:50.543723    2153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.543728    2153 out.go:358] Setting ErrFile to fd 2...
	I0818 11:48:50.543730    2153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.543896    2153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 11:48:50.544977    2153 out.go:352] Setting JSON to false
	I0818 11:48:50.562315    2153 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1100,"bootTime":1724005830,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:48:50.562405    2153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:48:50.566523    2153 out.go:177] * [functional-685000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0818 11:48:50.573574    2153 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:48:50.573626    2153 notify.go:220] Checking for updates...
	I0818 11:48:50.580551    2153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:48:50.585594    2153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:48:50.588517    2153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:48:50.591640    2153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 11:48:50.592979    2153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:48:50.595805    2153 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:48:50.596101    2153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:48:50.600492    2153 out.go:177] * Using the qemu2 driver based on existing profile
	I0818 11:48:50.605553    2153 start.go:297] selected driver: qemu2
	I0818 11:48:50.605562    2153 start.go:901] validating driver "qemu2" against &{Name:functional-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:48:50.605628    2153 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:48:50.612603    2153 out.go:201] 
	W0818 11:48:50.616506    2153 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 11:48:50.620549    2153 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
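
Note: --dry-run validates flags against the saved profile without touching the VM, so an undersized memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23, as above:

	out/minikube-darwin-arm64 start -p functional-685000 --dry-run --memory 250MB --driver=qemu2; echo "exit: $?"   # expect 23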

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-685000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-685000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.088125ms)

-- stdout --
	* [functional-685000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0818 11:48:50.764082    2166 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:48:50.764198    2166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.764208    2166 out.go:358] Setting ErrFile to fd 2...
	I0818 11:48:50.764212    2166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:48:50.764351    2166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
	I0818 11:48:50.765727    2166 out.go:352] Setting JSON to false
	I0818 11:48:50.784405    2166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1100,"bootTime":1724005830,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0818 11:48:50.784487    2166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:48:50.788532    2166 out.go:177] * [functional-685000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0818 11:48:50.793553    2166 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:48:50.793605    2166 notify.go:220] Checking for updates...
	I0818 11:48:50.798662    2166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	I0818 11:48:50.805534    2166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0818 11:48:50.808550    2166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:48:50.811551    2166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	I0818 11:48:50.814552    2166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:48:50.818338    2166 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:48:50.818637    2166 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:48:50.822460    2166 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0818 11:48:50.829560    2166 start.go:297] selected driver: qemu2
	I0818 11:48:50.829568    2166 start.go:901] validating driver "qemu2" against &{Name:functional-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:48:50.829618    2166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:48:50.836514    2166 out.go:201] 
	W0818 11:48:50.840548    2166 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0818 11:48:50.844463    2166 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
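
Note: status -f takes a Go template over the status struct ("kublet" in the command above is just the test's own label for the .Kubelet field). The three output modes by hand:

	out/minikube-darwin-arm64 -p functional-685000 status
	out/minikube-darwin-arm64 -p functional-685000 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-darwin-arm64 -p functional-685000 status -o json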

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (23.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9d7b6e4a-5251-4cfc-be0b-a61ddda11141] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009894375s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-685000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-685000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-685000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-685000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [20c341f5-3245-4826-b11c-cab077b52c4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [20c341f5-3245-4826-b11c-cab077b52c4d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.009472375s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-685000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-685000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-685000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [916f1bd3-2e5a-4420-b07b-f290dff4bd0e] Pending
helpers_test.go:344: "sp-pod" [916f1bd3-2e5a-4420-b07b-f290dff4bd0e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [916f1bd3-2e5a-4420-b07b-f290dff4bd0e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.01034325s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-685000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.51s)
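
Note: condensed, the persistence check above is write, delete pod, recreate pod, read back (the two manifests live in the repo's testdata and are not reproduced here):

	kubectl --context functional-685000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-685000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-685000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-685000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-685000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-685000 exec sp-pod -- ls /tmp/mount   # foo should survive the pod restart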

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh -n functional-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cp functional-685000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd314793543/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh -n functional-685000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh -n functional-685000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)
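
Note: minikube cp copies in both directions and creates missing destination directories; the round trip from above, with a host destination substituted:

	out/minikube-darwin-arm64 -p functional-685000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p functional-685000 cp functional-685000:/home/docker/cp-test.txt ./cp-test.txt
	out/minikube-darwin-arm64 -p functional-685000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt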

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1459/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /etc/test/nested/copy/1459/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1459.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /etc/ssl/certs/1459.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1459.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /usr/share/ca-certificates/1459.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /etc/ssl/certs/14592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /usr/share/ca-certificates/14592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
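
Note: the hashed names checked above (51391683.0, 3ec20f2e.0) appear to be OpenSSL subject-hash link names for the synced certs; assuming a local copy of the .pem, the hash can be recomputed with:

	openssl x509 -noout -subject_hash -in 1459.pem   # prints the 8-hex-digit value used for /etc/ssl/certs/<hash>.0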

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-685000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 ssh "sudo systemctl is-active crio": exit status 1 (62.30325ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
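
Note: systemctl is-active exits 0 only for an active unit; "inactive" comes back with status 3, which minikube ssh propagates, so the non-zero exit above is this test's success condition:

	out/minikube-darwin-arm64 -p functional-685000 ssh "sudo systemctl is-active crio"; echo "exit: $?"   # expect inactive / 3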

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-685000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-685000
docker.io/kicbase/echo-server:functional-685000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-685000 image ls --format short --alsologtostderr:
I0818 11:48:52.318203    2192 out.go:345] Setting OutFile to fd 1 ...
I0818 11:48:52.318375    2192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.318387    2192 out.go:358] Setting ErrFile to fd 2...
I0818 11:48:52.318390    2192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.318534    2192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 11:48:52.318956    2192 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.319022    2192 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.319862    2192 ssh_runner.go:195] Run: systemctl --version
I0818 11:48:52.319874    2192 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/functional-685000/id_rsa Username:docker}
I0818 11:48:52.348260    2192 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
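
Note: this and the next three ImageCommands tests differ only in the --format flag; the same inventory can be pulled by hand in any of the four forms:

	out/minikube-darwin-arm64 -p functional-685000 image ls --format short
	out/minikube-darwin-arm64 -p functional-685000 image ls --format table
	out/minikube-darwin-arm64 -p functional-685000 image ls --format json
	out/minikube-darwin-arm64 -p functional-685000 image ls --format yaml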

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-685000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-685000 | ce2d2cda2d858 | 4.78MB |
| localhost/my-image                          | functional-685000 | f219e2afcbe20 | 1.41MB |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-685000 | 07b1843b4360b | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-685000 image ls --format table --alsologtostderr:
I0818 11:48:54.528015    2204 out.go:345] Setting OutFile to fd 1 ...
I0818 11:48:54.528161    2204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:54.528165    2204 out.go:358] Setting ErrFile to fd 2...
I0818 11:48:54.528167    2204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:54.528305    2204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 11:48:54.528760    2204 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:54.528823    2204 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:54.529633    2204 ssh_runner.go:195] Run: systemctl --version
I0818 11:48:54.529642    2204 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/functional-685000/id_rsa Username:docker}
I0818 11:48:54.555711    2204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/18 11:48:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-685000 image ls --format json --alsologtostderr:
[{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"07b1843b4360b65704eba79ce47d39e829353ff92b3aaf1fa5f2876dcdf092ee","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-685000"],"size":"30"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"f219e2afcbe2005d5fc953ca5c27c30a05fcfeb72c7136926c99fb988eaddef5","repoDigests":[],"repoTags":["localhost/my-image:functional-685000"],"size":"1410000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-685000"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-685000 image ls --format json --alsologtostderr:
I0818 11:48:54.452446    2202 out.go:345] Setting OutFile to fd 1 ...
I0818 11:48:54.452616    2202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:54.452620    2202 out.go:358] Setting ErrFile to fd 2...
I0818 11:48:54.452622    2202 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:54.452747    2202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 11:48:54.453202    2202 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:54.453264    2202 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:54.454064    2202 ssh_runner.go:195] Run: systemctl --version
I0818 11:48:54.454072    2202 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/functional-685000/id_rsa Username:docker}
I0818 11:48:54.480278    2202 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-685000 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-685000
size: "4780000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 07b1843b4360b65704eba79ce47d39e829353ff92b3aaf1fa5f2876dcdf092ee
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-685000
size: "30"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-685000 image ls --format yaml --alsologtostderr:
I0818 11:48:52.390783    2194 out.go:345] Setting OutFile to fd 1 ...
I0818 11:48:52.390951    2194 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.390954    2194 out.go:358] Setting ErrFile to fd 2...
I0818 11:48:52.390957    2194 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.391107    2194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 11:48:52.391616    2194 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.391686    2194 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.392589    2194 ssh_runner.go:195] Run: systemctl --version
I0818 11:48:52.392602    2194 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/functional-685000/id_rsa Username:docker}
I0818 11:48:52.418883    2194 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
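
Note: the ImageListJson and ImageListYaml flows above reduce to the same command with a different --format flag (both lines verbatim from this run, against profile functional-685000):

    out/minikube-darwin-arm64 -p functional-685000 image ls --format json --alsologtostderr
    out/minikube-darwin-arm64 -p functional-685000 image ls --format yaml --alsologtostderr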

TestFunctional/parallel/ImageCommands/ImageBuild (1.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 ssh pgrep buildkitd: exit status 1 (61.038416ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image build -t localhost/my-image:functional-685000 testdata/build --alsologtostderr
E0818 11:48:52.555198    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-685000 image build -t localhost/my-image:functional-685000 testdata/build --alsologtostderr: (1.857512834s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-685000 image build -t localhost/my-image:functional-685000 testdata/build --alsologtostderr:
I0818 11:48:52.522203    2198 out.go:345] Setting OutFile to fd 1 ...
I0818 11:48:52.522433    2198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.522437    2198 out.go:358] Setting ErrFile to fd 2...
I0818 11:48:52.522439    2198 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:48:52.522568    2198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-984/.minikube/bin
I0818 11:48:52.523005    2198 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.523865    2198 config.go:182] Loaded profile config "functional-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:48:52.524723    2198 ssh_runner.go:195] Run: systemctl --version
I0818 11:48:52.524733    2198 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-984/.minikube/machines/functional-685000/id_rsa Username:docker}
I0818 11:48:52.551617    2198 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3481490539.tar
I0818 11:48:52.551679    2198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0818 11:48:52.555026    2198 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3481490539.tar
I0818 11:48:52.556547    2198 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3481490539.tar: stat -c "%s %y" /var/lib/minikube/build/build.3481490539.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3481490539.tar': No such file or directory
I0818 11:48:52.556562    2198 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3481490539.tar --> /var/lib/minikube/build/build.3481490539.tar (3072 bytes)
I0818 11:48:52.564983    2198 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3481490539
I0818 11:48:52.568875    2198 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3481490539 -xf /var/lib/minikube/build/build.3481490539.tar
I0818 11:48:52.572307    2198 docker.go:360] Building image: /var/lib/minikube/build/build.3481490539
I0818 11:48:52.572359    2198 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-685000 /var/lib/minikube/build/build.3481490539
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:f219e2afcbe2005d5fc953ca5c27c30a05fcfeb72c7136926c99fb988eaddef5 done
#8 naming to localhost/my-image:functional-685000 done
#8 DONE 0.0s
I0818 11:48:54.325793    2198 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-685000 /var/lib/minikube/build/build.3481490539: (1.753435916s)
I0818 11:48:54.325865    2198 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3481490539
I0818 11:48:54.329725    2198 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3481490539.tar
I0818 11:48:54.333104    2198 build_images.go:217] Built localhost/my-image:functional-685000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3481490539.tar
I0818 11:48:54.333122    2198 build_images.go:133] succeeded building to: functional-685000
I0818 11:48:54.333124    2198 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.99s)
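
Note: the build flow above can be replayed by hand with the commands the test ran (verbatim). The pgrep probe only checks whether buildkitd is available; here it exited non-zero, and the build went through the docker builder (docker.go:360), as the #0..#8 build log shows:

    out/minikube-darwin-arm64 -p functional-685000 ssh pgrep buildkitd
    out/minikube-darwin-arm64 -p functional-685000 image build -t localhost/my-image:functional-685000 testdata/build --alsologtostderr
    out/minikube-darwin-arm64 -p functional-685000 image ls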

TestFunctional/parallel/ImageCommands/Setup (1.75s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.733596584s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-685000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.31s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-685000 docker-env) && out/minikube-darwin-arm64 status -p functional-685000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-685000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)
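
Note: the docker-env round trip above is a one-liner (verbatim from the test): point the local docker client at the VM's daemon, then list its images:

    eval $(out/minikube-darwin-arm64 -p functional-685000 docker-env) && docker images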

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-685000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-685000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-6hqtl" [f8bc0a3f-c956-409e-8acb-78279b6327c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-6hqtl" [f8bc0a3f-c956-409e-8acb-78279b6327c0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.011047209s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
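
Note: the hello-node deployment created here is reused by the ServiceCmd subtests below; its setup is two kubectl commands (verbatim):

    kubectl --context functional-685000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-685000 expose deployment hello-node --type=NodePort --port=8080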

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image load --daemon kicbase/echo-server:functional-685000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image load --daemon kicbase/echo-server:functional-685000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-685000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image load --daemon kicbase/echo-server:functional-685000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image save kicbase/echo-server:functional-685000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image rm kicbase/echo-server:functional-685000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
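
Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a save/remove/restore round trip (commands verbatim from the three tests):

    out/minikube-darwin-arm64 -p functional-685000 image save kicbase/echo-server:functional-685000 /Users/jenkins/workspace/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-685000 image rm kicbase/echo-server:functional-685000
    out/minikube-darwin-arm64 -p functional-685000 image load /Users/jenkins/workspace/echo-server-save.tar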

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-685000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 image save --daemon kicbase/echo-server:functional-685000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-685000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.59s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2022: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-685000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c505ce9f-f247-4fb7-b758-6219b3987234] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c505ce9f-f247-4fb7-b758-6219b3987234] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009759333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)

TestFunctional/parallel/ServiceCmd/List (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service list -o json
functional_test.go:1494: Took "85.79625ms" to run "out/minikube-darwin-arm64 -p functional-685000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31715
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31715
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
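
Note: the HTTPS, Format and URL subtests retrieve the same NodePort endpoint (192.168.105.4:31715) with different flags (verbatim):

    out/minikube-darwin-arm64 -p functional-685000 service --namespace=default --https --url hello-node
    out/minikube-darwin-arm64 -p functional-685000 service hello-node --url --format={{.IP}}
    out/minikube-darwin-arm64 -p functional-685000 service hello-node --url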

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-685000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.217.127 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
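
Note: with the tunnel running, the LoadBalancer service is verified three ways by the subtests above (commands verbatim; 10.96.0.10 is the cluster DNS service address that the dig query targets):

    kubectl --context functional-685000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.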

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-685000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "86.018625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.679125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "85.416625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.642375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.39s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724006922357724000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724006922357724000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724006922357724000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001/test-1724006922357724000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 test-1724006922357724000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh cat /mount-9p/test-1724006922357724000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-685000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d748de01-96a9-4c32-8df6-899a9cd0de31] Pending
helpers_test.go:344: "busybox-mount" [d748de01-96a9-4c32-8df6-899a9cd0de31] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d748de01-96a9-4c32-8df6-899a9cd0de31] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d748de01-96a9-4c32-8df6-899a9cd0de31] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.010285417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-685000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port122789443/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.39s)
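
Note: the 9p mount lifecycle above reduces to three steps (verbatim except <host-dir>, a placeholder for the per-run temp directory under /var/folders/... shown in the log):

    out/minikube-darwin-arm64 mount -p functional-685000 <host-dir>:/mount-9p --alsologtostderr -v=1
    out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-685000 ssh "sudo umount -f /mount-9p"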

TestFunctional/parallel/MountCmd/specific-port (1.2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2628426462/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.771458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2628426462/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-685000 ssh "sudo umount -f /mount-9p": exit status 1 (62.576375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-685000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2628426462/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.20s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T" /mount1: (1.399740459s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-685000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-685000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-685000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3802270838/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-685000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-685000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-685000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (183.03s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-108000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0818 11:51:08.669262    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:36.397495    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/addons-711000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-108000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m2.837570666s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (183.03s)
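
Note: the HA cluster used by this whole group is created and verified with (verbatim):

    out/minikube-darwin-arm64 start -p ha-108000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr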

TestMultiControlPlane/serial/DeployApp (4.69s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-108000 -- rollout status deployment/busybox: (3.076116s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-88rbg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-k6rrg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-88rbg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-k6rrg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-88rbg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-k6rrg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.69s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-88rbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-88rbg -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-k6rrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-k6rrg -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
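
Note: host reachability is checked per pod by resolving host.minikube.internal and pinging the address it returns (verbatim, for one of the three busybox pods):

    out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-darwin-arm64 kubectl -p ha-108000 -- exec busybox-7dff88458-2bsld -- sh -c "ping -c 1 192.168.105.1"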

TestMultiControlPlane/serial/AddWorkerNode (54.74s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-108000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-108000 -v=7 --alsologtostderr: (54.529028125s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.74s)
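
Note: the worker join is a single node add followed by a status check (verbatim):

    out/minikube-darwin-arm64 node add -p ha-108000 -v=7 --alsologtostderr
    out/minikube-darwin-arm64 -p ha-108000 status -v=7 --alsologtostderr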

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-108000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.18s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp testdata/cp-test.txt ha-108000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile4221395673/001/cp-test_ha-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000:/home/docker/cp-test.txt ha-108000-m02:/home/docker/cp-test_ha-108000_ha-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test_ha-108000_ha-108000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000:/home/docker/cp-test.txt ha-108000-m03:/home/docker/cp-test_ha-108000_ha-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test_ha-108000_ha-108000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000:/home/docker/cp-test.txt ha-108000-m04:/home/docker/cp-test_ha-108000_ha-108000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test_ha-108000_ha-108000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp testdata/cp-test.txt ha-108000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile4221395673/001/cp-test_ha-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m02:/home/docker/cp-test.txt ha-108000:/home/docker/cp-test_ha-108000-m02_ha-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test_ha-108000-m02_ha-108000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m02:/home/docker/cp-test.txt ha-108000-m03:/home/docker/cp-test_ha-108000-m02_ha-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test_ha-108000-m02_ha-108000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m02:/home/docker/cp-test.txt ha-108000-m04:/home/docker/cp-test_ha-108000-m02_ha-108000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test_ha-108000-m02_ha-108000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp testdata/cp-test.txt ha-108000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile4221395673/001/cp-test_ha-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m03:/home/docker/cp-test.txt ha-108000:/home/docker/cp-test_ha-108000-m03_ha-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test_ha-108000-m03_ha-108000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m03:/home/docker/cp-test.txt ha-108000-m02:/home/docker/cp-test_ha-108000-m03_ha-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test_ha-108000-m03_ha-108000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m03:/home/docker/cp-test.txt ha-108000-m04:/home/docker/cp-test_ha-108000-m03_ha-108000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test_ha-108000-m03_ha-108000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp testdata/cp-test.txt ha-108000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile4221395673/001/cp-test_ha-108000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m04:/home/docker/cp-test.txt ha-108000:/home/docker/cp-test_ha-108000-m04_ha-108000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000 "sudo cat /home/docker/cp-test_ha-108000-m04_ha-108000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m04:/home/docker/cp-test.txt ha-108000-m02:/home/docker/cp-test_ha-108000-m04_ha-108000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test_ha-108000-m04_ha-108000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 cp ha-108000-m04:/home/docker/cp-test.txt ha-108000-m03:/home/docker/cp-test_ha-108000-m04_ha-108000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m03 "sudo cat /home/docker/cp-test_ha-108000-m04_ha-108000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.18s)
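
Note: every hop in the copy matrix above pairs a cp with an ssh cat verification; one representative hop (verbatim):

    out/minikube-darwin-arm64 -p ha-108000 cp testdata/cp-test.txt ha-108000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p ha-108000 ssh -n ha-108000-m02 "sudo cat /home/docker/cp-test.txt"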

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0818 12:08:06.703125    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:09:29.787667    1459 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-984/.minikube/profiles/functional-685000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.10883775s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.27s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-220000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-220000 --output=json --user=testUser: (3.274386917s)
--- PASS: TestJSONOutput/stop/Command (3.27s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-882000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-882000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.913625ms)
-- stdout --
	{"specversion":"1.0","id":"c0e6dea3-82fe-4aa7-9c93-7cfe99797e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-882000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f45c19b2-e643-45aa-8582-c642035d2eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"62c110cb-d579-4958-906d-27518d86394c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig"}}
	{"specversion":"1.0","id":"e42b682b-ed2d-4037-817e-f9f8446124ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d0f1cc47-68c4-4b88-838c-03857218b9fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3831fc83-a1c5-4321-a744-9115f58993a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube"}}
	{"specversion":"1.0","id":"7d86428e-6955-4599-a9be-9dcfbeb96714","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93b877c2-b2cb-4b03-917a-24bd7c0721de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-882000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-882000
--- PASS: TestErrorJSONOutput (0.21s)
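
Note: every line in the stdout block above is a self-contained CloudEvents-style JSON object, and the final event of type "io.k8s.sigs.minikube.error" carries the exit code and message the test asserts on. A hedged sketch of a consumer for that stream, with field names taken from the events in this log; it reads the stream from stdin, so you would pipe a --output=json run into it.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the events above; all data
// values in this log are strings, so a string map suffices here.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout above, this would surface the single DRV_UNSUPPORTED_OS event with exit code 56.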

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.12s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-621000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.875459ms)
-- stdout --
	* [NoKubernetes-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-984/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-984/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
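
Note: the assertion here hinges on the process exit code (status 14, which this run pairs with the MK_USAGE error shown in stderr). One way to recover that code in Go, sketched under the assumption that the same binary and flags are available locally:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same binary, profile, and conflicting flags as the run above.
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "NoKubernetes-621000", "--no-kubernetes",
		"--kubernetes-version=1.20", "--driver=qemu2")
	err := cmd.Run()
	// A non-zero exit surfaces as *exec.ExitError; unwrap it for the code.
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // this run saw 14 (MK_USAGE)
	} else if err != nil {
		fmt.Println("command did not run:", err)
	}
}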

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-621000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-621000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.59875ms)
-- stdout --
	* The control-plane node NoKubernetes-621000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-621000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.29s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.647324625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.64697325s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.29s)

TestNoKubernetes/serial/Stop (3.51s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-621000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-621000: (3.511465417s)
--- PASS: TestNoKubernetes/serial/Stop (3.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-621000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-621000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.982875ms)
-- stdout --
	* The control-plane node NoKubernetes-621000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-621000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-521000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (2.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-088000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-088000 --alsologtostderr -v=3: (2.080683s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-088000 -n old-k8s-version-088000: exit status 7 (52.343625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-088000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (1.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-972000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-972000 --alsologtostderr -v=3: (1.866337334s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-972000 -n no-preload-972000: exit status 7 (56.89025ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-972000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-494000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-494000 --alsologtostderr -v=3: (3.982775416s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-494000 -n default-k8s-diff-port-494000: exit status 7 (55.547ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-494000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-384000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.75s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-384000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-384000 --alsologtostderr -v=3: (3.748524083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.75s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (56.454083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-384000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/embed-certs/serial/Stop (3.8s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-470000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-470000 --alsologtostderr -v=3: (3.80184225s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.80s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-470000 -n embed-certs-470000: exit status 7 (58.255166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-470000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-937000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-937000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-937000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: containerd config dump:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: crio daemon status:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: crio daemon config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: /etc/crio:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

>>> host: crio config:
* Profile "cilium-937000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937000"

----------------------- debugLogs end: cilium-937000 [took: 2.190481708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-937000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)
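
Every ">>> host:" probe above fails identically because the debug-log collector shells out through the "cilium-937000" profile, which was never created: the cilium test was skipped before "minikube start" ever ran, so each host-side inspection command had no cluster to query. If you wanted to rerun the collection against a live profile, the starting point is exactly what the error message prints (commands copied from the log above; any driver flags this QEMU/macOS job would need are not shown in this excerpt):

    # list known profiles, then create the one the collector expects
    minikube profile list
    minikube start -p cilium-937000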
TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-507000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
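
The SKIP above is a driver gate at start_stop_delete_test.go:103: the test is declared to run only on the VirtualBox driver, so under QEMU it bails out before ever creating a cluster. A minimal sketch of such a gate in a Go test follows; the driverName helper and its environment variable are hypothetical stand-ins, not minikube's actual harness:

    package startstop

    import (
        "os"
        "testing"
    )

    // driverName reports which minikube driver the suite is exercising.
    // In this sketch it comes from an environment variable; hypothetical
    // stand-in for whatever the real harness uses.
    func driverName() string {
        return os.Getenv("TEST_DRIVER")
    }

    func TestDisableDriverMounts(t *testing.T) {
        // Driver gate: anything other than virtualbox skips immediately,
        // which is what produces the "--- SKIP ... only runs on virtualbox"
        // line in a report like this one.
        if driverName() != "virtualbox" {
            t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
        }
        // Test body would start a cluster with driver mounts disabled here.
    }

Because the gate fires before "minikube start", the harness still has a stub profile to tear down, which is why the helpers_test.go cleanup and delete lines appear above even for a skipped test.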