Test Report: QEMU_macOS 19349

0359be70ee85a493d9f37ccc73e8278336c81275:2024-07-31:35584

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.39
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.13
55 TestCertOptions 10.16
56 TestCertExpiration 196.56
57 TestDockerFlags 12.67
58 TestForceSystemdFlag 10.02
59 TestForceSystemdEnv 10.14
104 TestFunctional/parallel/ServiceCmdConnect 33.6
176 TestMultiControlPlane/serial/StopSecondaryNode 205.09
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 55.22
178 TestMultiControlPlane/serial/RestartSecondaryNode 159.63
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 185.31
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
183 TestMultiControlPlane/serial/StopCluster 153
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 10.1
193 TestJSONOutput/start/Command 9.87
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.09
225 TestMountStart/serial/StartWithMountFirst 9.92
228 TestMultiNode/serial/FreshStart2Nodes 10.01
229 TestMultiNode/serial/DeployApp2Nodes 119.92
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.08
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.14
236 TestMultiNode/serial/StartAfterStop 47.81
237 TestMultiNode/serial/RestartKeepsNodes 8.29
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.37
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 19.98
245 TestPreload 10.06
247 TestScheduledStopUnix 10.11
248 TestSkaffold 12.24
251 TestRunningBinaryUpgrade 590.2
253 TestKubernetesUpgrade 19.64
267 TestStoppedBinaryUpgrade/Upgrade 610.22
276 TestPause/serial/Start 9.84
280 TestNoKubernetes/serial/StartWithK8s 9.79
281 TestNoKubernetes/serial/StartWithStopK8s 5.3
282 TestNoKubernetes/serial/Start 5.31
286 TestNoKubernetes/serial/StartNoArgs 7.04
287 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.11
288 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.68
290 TestNetworkPlugins/group/auto/Start 9.76
291 TestNetworkPlugins/group/kindnet/Start 9.88
292 TestNetworkPlugins/group/flannel/Start 9.8
293 TestNetworkPlugins/group/enable-default-cni/Start 9.91
294 TestNetworkPlugins/group/bridge/Start 9.86
295 TestNetworkPlugins/group/kubenet/Start 9.86
296 TestNetworkPlugins/group/custom-flannel/Start 9.92
297 TestNetworkPlugins/group/calico/Start 9.85
298 TestNetworkPlugins/group/false/Start 9.97
300 TestStartStop/group/old-k8s-version/serial/FirstStart 10.08
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 10.14
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
316 TestStartStop/group/no-preload/serial/SecondStart 5.25
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/no-preload/serial/Pause 0.1
322 TestStartStop/group/embed-certs/serial/FirstStart 9.86
323 TestStartStop/group/embed-certs/serial/DeployApp 0.09
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
327 TestStartStop/group/embed-certs/serial/SecondStart 5.25
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
331 TestStartStop/group/embed-certs/serial/Pause 0.1
333 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.12
335 TestStartStop/group/newest-cni/serial/FirstStart 12.03
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.38
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.25
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.39s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.387080917s)

-- stdout --
	{"specversion":"1.0","id":"9e775277-73e4-4482-bf40-711b208638a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4797213c-d2e9-45bc-b52f-99866338fc3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19349"}}
	{"specversion":"1.0","id":"9fcc714a-452c-4687-8a11-e139dd0151ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig"}}
	{"specversion":"1.0","id":"ca953f6a-7a30-428e-b8b2-2299210c57cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4d1f7272-82c3-4a00-8de4-4ad44cddf996","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7874797-5cbf-4ad1-a561-95ed6c651c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube"}}
	{"specversion":"1.0","id":"91ad0482-a00c-4111-9270-45152390ff17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"194ce92b-61b5-42e8-b6c1-0e140fc72bef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"813a1b50-cb40-4cbd-ba84-4727de08b94f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"fbee76e8-e7c2-4fd3-b4ea-e4ef0472556c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab73a931-2499-46ba-b6de-d576008720bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-914000\" primary control-plane node in \"download-only-914000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3499a0e7-2e84-4b3a-b009-22a8367b3ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c65b4913-ea12-4665-b4ff-ed37cff06324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60] Decompressors:map[bz2:0x1400066b310 gz:0x1400066b318 tar:0x1400066b2c0 tar.bz2:0x1400066b2d0 tar.gz:0x1400066b2e0 tar.xz:0x1400066b2f0 tar.zst:0x1400066b300 tbz2:0x1400066b2d0 tgz:0x1400066b2e0 txz:0x1400066b2f0 tzst:0x1400066b300 xz:0x1400066b320 zip:0x1400066b330 zst:0x1400066b328] Getters:map[file:0x1400134c550 http:0x140004763c0 https:0x14000476410] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a46bd46f-3e33-4c37-890b-5d1795ddd73c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 09:39:05.145382    1554 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:39:05.145520    1554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:05.145524    1554 out.go:304] Setting ErrFile to fd 2...
	I0731 09:39:05.145526    1554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:05.145660    1554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	W0731 09:39:05.145747    1554 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19349-1049/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19349-1049/.minikube/config/config.json: no such file or directory
	I0731 09:39:05.147055    1554 out.go:298] Setting JSON to true
	I0731 09:39:05.164163    1554 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":509,"bootTime":1722443436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:39:05.164223    1554 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:39:05.169189    1554 out.go:97] [download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:39:05.169350    1554 notify.go:220] Checking for updates...
	W0731 09:39:05.169384    1554 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 09:39:05.170896    1554 out.go:169] MINIKUBE_LOCATION=19349
	I0731 09:39:05.173949    1554 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:39:05.178027    1554 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:39:05.179455    1554 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:39:05.182980    1554 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	W0731 09:39:05.189041    1554 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 09:39:05.189299    1554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:39:05.194031    1554 out.go:97] Using the qemu2 driver based on user configuration
	I0731 09:39:05.194050    1554 start.go:297] selected driver: qemu2
	I0731 09:39:05.194065    1554 start.go:901] validating driver "qemu2" against <nil>
	I0731 09:39:05.194146    1554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 09:39:05.196957    1554 out.go:169] Automatically selected the socket_vmnet network
	I0731 09:39:05.202734    1554 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 09:39:05.202822    1554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 09:39:05.202876    1554 cni.go:84] Creating CNI manager for ""
	I0731 09:39:05.202893    1554 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 09:39:05.202957    1554 start.go:340] cluster config:
	{Name:download-only-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:39:05.208432    1554 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 09:39:05.212974    1554 out.go:97] Downloading VM boot image ...
	I0731 09:39:05.212992    1554 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 09:39:09.736939    1554 out.go:97] Starting "download-only-914000" primary control-plane node in "download-only-914000" cluster
	I0731 09:39:09.736959    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:09.794112    1554 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:09.794121    1554 cache.go:56] Caching tarball of preloaded images
	I0731 09:39:09.794277    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:09.798820    1554 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 09:39:09.798827    1554 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:09.875268    1554 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:15.412431    1554 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:15.412580    1554 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:16.107363    1554 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 09:39:16.107626    1554 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-914000/config.json ...
	I0731 09:39:16.107646    1554 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-914000/config.json: {Name:mk5edff6f9ab0ab06bd4a3be203539e8b669aa05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 09:39:16.107897    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:16.108107    1554 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 09:39:16.460037    1554 out.go:169] 
	W0731 09:39:16.467366    1554 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60] Decompressors:map[bz2:0x1400066b310 gz:0x1400066b318 tar:0x1400066b2c0 tar.bz2:0x1400066b2d0 tar.gz:0x1400066b2e0 tar.xz:0x1400066b2f0 tar.zst:0x1400066b300 tbz2:0x1400066b2d0 tgz:0x1400066b2e0 txz:0x1400066b2f0 tzst:0x1400066b300 xz:0x1400066b320 zip:0x1400066b330 zst:0x1400066b328] Getters:map[file:0x1400134c550 http:0x140004763c0 https:0x14000476410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 09:39:16.467392    1554 out_reason.go:110] 
	W0731 09:39:16.474195    1554 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 09:39:16.478169    1554 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-914000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.39s)
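
The `-o=json` stream quoted above is newline-delimited CloudEvents; each line carries a `type` and a string-valued `data` map. As a minimal sketch (not minikube code; field names are taken from the stdout above), the error event behind exit status 40 can be pulled out of the stream like this:

	// events_decode.go: filter the newline-delimited CloudEvents emitted by
	// "minikube start -o=json" (format as quoted above) and print any
	// io.k8s.sigs.minikube.error event. Not part of the minikube test suite.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the fields used here; every value in "data" above is a string.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // error events can exceed the 64 KiB default
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON noise in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exitcode=%s name=%s: %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
			}
		}
	}

Piping the stdout above through this program would surface the single INET_CACHE_KUBECTL event with exitcode 40.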

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
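
The missing cached binary here is the downstream effect of the failure above: the checksum fetch for the darwin/arm64 kubectl v1.20.0 binary returns HTTP 404 (that release appears to predate published darwin/arm64 kubectl builds). A minimal sketch (assumes network access; not part of the test suite) that reproduces the 404 outside minikube:

	// probe_kubectl.go: HEAD the checksum URL from the failure message above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL copied from the error in TestDownloadOnly/v1.20.0/json-events.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // 404 Not Found, matching the report
	}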

TestOffline (10.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-290000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-290000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.983932666s)

-- stdout --
	* [offline-docker-290000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-290000" primary control-plane node in "offline-docker-290000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-290000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:12:08.097004    3636 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:12:08.097142    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:08.097145    3636 out.go:304] Setting ErrFile to fd 2...
	I0731 10:12:08.097148    3636 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:08.097273    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:12:08.098447    3636 out.go:298] Setting JSON to false
	I0731 10:12:08.116114    3636 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2492,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:12:08.116185    3636 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:12:08.119104    3636 out.go:177] * [offline-docker-290000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:12:08.127177    3636 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:12:08.127218    3636 notify.go:220] Checking for updates...
	I0731 10:12:08.133173    3636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:12:08.136174    3636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:12:08.139155    3636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:12:08.142119    3636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:12:08.145164    3636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:12:08.146731    3636 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:12:08.146777    3636 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:12:08.151078    3636 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:12:08.157969    3636 start.go:297] selected driver: qemu2
	I0731 10:12:08.157977    3636 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:12:08.157985    3636 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:12:08.159756    3636 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:12:08.163186    3636 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:12:08.166416    3636 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:12:08.166440    3636 cni.go:84] Creating CNI manager for ""
	I0731 10:12:08.166458    3636 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:12:08.166463    3636 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:12:08.166499    3636 start.go:340] cluster config:
	{Name:offline-docker-290000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:12:08.170575    3636 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:12:08.178136    3636 out.go:177] * Starting "offline-docker-290000" primary control-plane node in "offline-docker-290000" cluster
	I0731 10:12:08.182155    3636 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:12:08.182182    3636 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:12:08.182190    3636 cache.go:56] Caching tarball of preloaded images
	I0731 10:12:08.182257    3636 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:12:08.182263    3636 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:12:08.182329    3636 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/offline-docker-290000/config.json ...
	I0731 10:12:08.182339    3636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/offline-docker-290000/config.json: {Name:mk9c41d6fa2545965764cbc47fc6a47c2ce22180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:12:08.182625    3636 start.go:360] acquireMachinesLock for offline-docker-290000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:08.182655    3636 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "offline-docker-290000"
	I0731 10:12:08.182665    3636 start.go:93] Provisioning new machine with config: &{Name:offline-docker-290000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:12:08.182701    3636 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:12:08.191196    3636 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:12:08.207138    3636 start.go:159] libmachine.API.Create for "offline-docker-290000" (driver="qemu2")
	I0731 10:12:08.207169    3636 client.go:168] LocalClient.Create starting
	I0731 10:12:08.207246    3636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:12:08.207282    3636 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:08.207293    3636 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:08.207333    3636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:12:08.207356    3636 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:08.207365    3636 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:08.207715    3636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:12:08.358943    3636 main.go:141] libmachine: Creating SSH key...
	I0731 10:12:08.452343    3636 main.go:141] libmachine: Creating Disk image...
	I0731 10:12:08.452351    3636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:12:08.452605    3636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:08.461905    3636 main.go:141] libmachine: STDOUT: 
	I0731 10:12:08.461929    3636 main.go:141] libmachine: STDERR: 
	I0731 10:12:08.462010    3636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2 +20000M
	I0731 10:12:08.470977    3636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:12:08.471008    3636 main.go:141] libmachine: STDERR: 
	I0731 10:12:08.471030    3636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:08.471035    3636 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:12:08.471048    3636 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:08.471071    3636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ab:04:66:2e:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:08.473192    3636 main.go:141] libmachine: STDOUT: 
	I0731 10:12:08.473215    3636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:08.473235    3636 client.go:171] duration metric: took 266.066333ms to LocalClient.Create
	I0731 10:12:10.475270    3636 start.go:128] duration metric: took 2.292602792s to createHost
	I0731 10:12:10.475292    3636 start.go:83] releasing machines lock for "offline-docker-290000", held for 2.292671833s
	W0731 10:12:10.475308    3636 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:10.492058    3636 out.go:177] * Deleting "offline-docker-290000" in qemu2 ...
	W0731 10:12:10.502010    3636 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:10.502025    3636 start.go:729] Will try again in 5 seconds ...
	I0731 10:12:15.504164    3636 start.go:360] acquireMachinesLock for offline-docker-290000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:15.504587    3636 start.go:364] duration metric: took 313.875µs to acquireMachinesLock for "offline-docker-290000"
	I0731 10:12:15.504706    3636 start.go:93] Provisioning new machine with config: &{Name:offline-docker-290000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-290000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:12:15.505064    3636 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:12:15.520523    3636 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:12:15.571747    3636 start.go:159] libmachine.API.Create for "offline-docker-290000" (driver="qemu2")
	I0731 10:12:15.571803    3636 client.go:168] LocalClient.Create starting
	I0731 10:12:15.571919    3636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:12:15.571979    3636 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:15.571996    3636 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:15.572065    3636 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:12:15.572119    3636 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:15.572132    3636 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:15.572727    3636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:12:15.734731    3636 main.go:141] libmachine: Creating SSH key...
	I0731 10:12:15.986093    3636 main.go:141] libmachine: Creating Disk image...
	I0731 10:12:15.986101    3636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:12:15.986358    3636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:15.996281    3636 main.go:141] libmachine: STDOUT: 
	I0731 10:12:15.996297    3636 main.go:141] libmachine: STDERR: 
	I0731 10:12:15.996365    3636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2 +20000M
	I0731 10:12:16.004227    3636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:12:16.004247    3636 main.go:141] libmachine: STDERR: 
	I0731 10:12:16.004259    3636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:16.004265    3636 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:12:16.004275    3636 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:16.004301    3636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:2f:19:8f:fc:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/offline-docker-290000/disk.qcow2
	I0731 10:12:16.005908    3636 main.go:141] libmachine: STDOUT: 
	I0731 10:12:16.005925    3636 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:16.005937    3636 client.go:171] duration metric: took 434.1375ms to LocalClient.Create
	I0731 10:12:18.008073    3636 start.go:128] duration metric: took 2.503013417s to createHost
	I0731 10:12:18.008157    3636 start.go:83] releasing machines lock for "offline-docker-290000", held for 2.503588584s
	W0731 10:12:18.008484    3636 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-290000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-290000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:18.026074    3636 out.go:177] 
	W0731 10:12:18.031022    3636 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:12:18.031050    3636 out.go:239] * 
	* 
	W0731 10:12:18.032524    3636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:12:18.043012    3636 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-290000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-31 10:12:18.053195 -0700 PDT m=+1993.024681543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-290000 -n offline-docker-290000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-290000 -n offline-docker-290000: exit status 7 (48.831917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-290000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-290000
--- FAIL: TestOffline (10.13s)
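
Every qemu2 provisioning failure in this report shares the same proximate cause: socket_vmnet_client cannot connect to /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not running on the host. A minimal preflight sketch (socket path taken from the logs above; not part of the suite) that would catch this before any test runs:

	// socket_check.go: verify something is accepting connections on the
	// socket_vmnet unix socket used by the qemu2 driver in the logs above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "Connection refused" here matches every GUEST_PROVISION failure above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}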

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-669000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-669000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.893666167s)

-- stdout --
	* [cert-options-669000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-669000" primary control-plane node in "cert-options-669000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-669000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-669000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-669000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-669000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-669000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.824084ms)

-- stdout --
	* The control-plane node cert-options-669000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-669000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-669000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-669000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-669000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-669000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.4075ms)

-- stdout --
	* The control-plane node cert-options-669000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-669000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-669000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-669000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-669000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-31 10:23:49.84364 -0700 PDT m=+2684.822009876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-669000 -n cert-options-669000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-669000 -n cert-options-669000: exit status 7 (29.776541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-669000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-669000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-669000
--- FAIL: TestCertOptions (10.16s)
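
The four SAN assertions above fail only as a consequence of the start failure: the qemu2 driver never reached the vmnet helper at /var/run/socket_vmnet, so the profile never booted and there was no apiserver certificate to inspect. A quick host-side check of the helper (a sketch; the Homebrew service name and the sudo invocation are assumptions based on a default `brew install socket_vmnet` setup, not taken from this report):

    # confirm the socket exists and the daemon is registered with launchd
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # if the daemon is gone, restart it; socket_vmnet must run as root,
    # hence running brew services under sudo
    sudo "$(which brew)" services restart socket_vmnet

The same "Connection refused" from socket_vmnet recurs in every failure below, so the per-test assertions are secondary to this environment problem.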

TestCertExpiration (196.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.156288833s)

-- stdout --
	* [cert-expiration-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-013000" primary control-plane node in "cert-expiration-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.244778542s)

-- stdout --
	* [cert-expiration-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-013000" primary control-plane node in "cert-expiration-013000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-013000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-013000" primary control-plane node in "cert-expiration-013000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-31 10:26:43.397642 -0700 PDT m=+2858.378817543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-013000 -n cert-expiration-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-013000 -n cert-expiration-013000: exit status 7 (50.611709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-013000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-013000
--- FAIL: TestCertExpiration (196.56s)
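
The assertion this test exists for, that a restart after the 3m --cert-expiration window warns about expired certificates (cert_options_test.go:136), is never exercised: both the initial create and the later restart die on the same socket_vmnet refusal before provisioning. On a cluster that does boot, the shortened expiry can be confirmed directly; a minimal sketch, reusing the in-VM certificate path from TestCertOptions above:

    # print the apiserver certificate's validity window from inside the VM
    out/minikube-darwin-arm64 -p cert-expiration-013000 ssh \
      "openssl x509 -noout -startdate -enddate -in /var/lib/minikube/certs/apiserver.crt"

With --cert-expiration=3m the notAfter date should land about three minutes after creation, which is what forces the expired-certificate path on the second start.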

TestDockerFlags (12.67s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-545000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-545000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.432959291s)

-- stdout --
	* [docker-flags-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-545000" primary control-plane node in "docker-flags-545000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-545000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:23:27.158890    4234 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:23:27.162388    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:27.162392    4234 out.go:304] Setting ErrFile to fd 2...
	I0731 10:23:27.162394    4234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:27.162542    4234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:23:27.166784    4234 out.go:298] Setting JSON to false
	I0731 10:23:27.184286    4234 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3171,"bootTime":1722443436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:23:27.184366    4234 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:23:27.188554    4234 out.go:177] * [docker-flags-545000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:23:27.195627    4234 notify.go:220] Checking for updates...
	I0731 10:23:27.198527    4234 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:23:27.206390    4234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:23:27.214558    4234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:23:27.221579    4234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:23:27.228537    4234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:23:27.236560    4234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:23:27.240947    4234 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:27.241017    4234 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:27.241069    4234 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:23:27.247635    4234 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:23:27.251645    4234 start.go:297] selected driver: qemu2
	I0731 10:23:27.251654    4234 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:23:27.251666    4234 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:23:27.254474    4234 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:23:27.258690    4234 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:23:27.262624    4234 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 10:23:27.262654    4234 cni.go:84] Creating CNI manager for ""
	I0731 10:23:27.262665    4234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:23:27.262670    4234 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:23:27.262714    4234 start.go:340] cluster config:
	{Name:docker-flags-545000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:23:27.267380    4234 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:23:27.278588    4234 out.go:177] * Starting "docker-flags-545000" primary control-plane node in "docker-flags-545000" cluster
	I0731 10:23:27.279841    4234 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:23:27.279860    4234 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:23:27.279872    4234 cache.go:56] Caching tarball of preloaded images
	I0731 10:23:27.279942    4234 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:23:27.279949    4234 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:23:27.280030    4234 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/docker-flags-545000/config.json ...
	I0731 10:23:27.280044    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/docker-flags-545000/config.json: {Name:mk430a62143010d82fa8ac619c45a4a465532e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:23:27.280335    4234 start.go:360] acquireMachinesLock for docker-flags-545000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:29.640651    4234 start.go:364] duration metric: took 2.360319916s to acquireMachinesLock for "docker-flags-545000"
	I0731 10:23:29.640788    4234 start.go:93] Provisioning new machine with config: &{Name:docker-flags-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:29.641025    4234 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:29.652862    4234 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:29.700930    4234 start.go:159] libmachine.API.Create for "docker-flags-545000" (driver="qemu2")
	I0731 10:23:29.700981    4234 client.go:168] LocalClient.Create starting
	I0731 10:23:29.701113    4234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:29.701167    4234 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:29.701190    4234 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:29.701254    4234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:29.701301    4234 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:29.701327    4234 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:29.702135    4234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:29.863550    4234 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:30.088416    4234 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:30.088424    4234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:30.088670    4234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:30.098419    4234 main.go:141] libmachine: STDOUT: 
	I0731 10:23:30.098439    4234 main.go:141] libmachine: STDERR: 
	I0731 10:23:30.098496    4234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2 +20000M
	I0731 10:23:30.106290    4234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:30.106303    4234 main.go:141] libmachine: STDERR: 
	I0731 10:23:30.106319    4234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:30.106324    4234 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:30.106335    4234 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:30.106363    4234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5c:e1:13:11:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:30.107911    4234 main.go:141] libmachine: STDOUT: 
	I0731 10:23:30.107927    4234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:30.107947    4234 client.go:171] duration metric: took 406.964708ms to LocalClient.Create
	I0731 10:23:32.110080    4234 start.go:128] duration metric: took 2.469064167s to createHost
	I0731 10:23:32.110143    4234 start.go:83] releasing machines lock for "docker-flags-545000", held for 2.469480917s
	W0731 10:23:32.110211    4234 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:32.124419    4234 out.go:177] * Deleting "docker-flags-545000" in qemu2 ...
	W0731 10:23:32.155722    4234 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:32.155752    4234 start.go:729] Will try again in 5 seconds ...
	I0731 10:23:37.157820    4234 start.go:360] acquireMachinesLock for docker-flags-545000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:37.158198    4234 start.go:364] duration metric: took 316.958µs to acquireMachinesLock for "docker-flags-545000"
	I0731 10:23:37.158324    4234 start.go:93] Provisioning new machine with config: &{Name:docker-flags-545000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-545000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:37.158550    4234 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:37.167131    4234 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:37.204980    4234 start.go:159] libmachine.API.Create for "docker-flags-545000" (driver="qemu2")
	I0731 10:23:37.205028    4234 client.go:168] LocalClient.Create starting
	I0731 10:23:37.205126    4234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:37.205166    4234 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:37.205200    4234 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:37.205267    4234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:37.205294    4234 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:37.205307    4234 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:37.205877    4234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:37.364461    4234 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:37.496451    4234 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:37.496457    4234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:37.496647    4234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:37.506037    4234 main.go:141] libmachine: STDOUT: 
	I0731 10:23:37.506052    4234 main.go:141] libmachine: STDERR: 
	I0731 10:23:37.506098    4234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2 +20000M
	I0731 10:23:37.513818    4234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:37.513831    4234 main.go:141] libmachine: STDERR: 
	I0731 10:23:37.513842    4234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:37.513846    4234 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:37.513867    4234 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:37.513903    4234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:95:bc:68:73:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/docker-flags-545000/disk.qcow2
	I0731 10:23:37.515519    4234 main.go:141] libmachine: STDOUT: 
	I0731 10:23:37.515533    4234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:37.515545    4234 client.go:171] duration metric: took 310.517792ms to LocalClient.Create
	I0731 10:23:39.517677    4234 start.go:128] duration metric: took 2.359140625s to createHost
	I0731 10:23:39.517721    4234 start.go:83] releasing machines lock for "docker-flags-545000", held for 2.359535875s
	W0731 10:23:39.518088    4234 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-545000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:39.531630    4234 out.go:177] 
	W0731 10:23:39.535307    4234 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:23:39.535342    4234 out.go:239] * 
	* 
	W0731 10:23:39.536995    4234 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:23:39.548753    4234 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-545000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-545000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-545000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.083708ms)

-- stdout --
	* The control-plane node docker-flags-545000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-545000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-545000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-545000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-545000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-545000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-545000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-545000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-545000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.522959ms)

-- stdout --
	* The control-plane node docker-flags-545000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-545000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-545000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-545000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-545000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-545000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 10:23:39.684476 -0700 PDT m=+2674.662681251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-545000 -n docker-flags-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-545000 -n docker-flags-545000: exit status 7 (28.9845ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-545000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-545000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-545000
--- FAIL: TestDockerFlags (12.67s)
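
Both probes at docker_test.go:56 and docker_test.go:67 hit the "host is not running" message instead of dockerd's systemd unit, so the env and opt assertions cannot pass. For reference, the passing shape of the two checks, with expected values taken from the flags at docker_test.go:51 (the exact property formatting is systemd's, not guaranteed by this report):

    # --docker-env values should land in the unit's Environment property
    out/minikube-darwin-arm64 -p docker-flags-545000 ssh \
      "sudo systemctl show docker --property=Environment --no-pager"
    # expected to mention FOO=BAR and BAZ=BAT

    # --docker-opt values should appear on the dockerd command line
    out/minikube-darwin-arm64 -p docker-flags-545000 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager"
    # expected to mention --debug and --icc=true

The stderr above also records the exact socket_vmnet_client invocation the driver uses, which makes the refusal reproducible outside minikube (an assumption from socket_vmnet's documented client usage, not a command from this report):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true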

TestForceSystemdFlag (10.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-507000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-507000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.829386917s)

-- stdout --
	* [force-systemd-flag-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-507000" primary control-plane node in "force-systemd-flag-507000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:23:02.005817    4118 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:23:02.005934    4118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:02.005938    4118 out.go:304] Setting ErrFile to fd 2...
	I0731 10:23:02.005940    4118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:02.006058    4118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:23:02.007094    4118 out.go:298] Setting JSON to false
	I0731 10:23:02.022998    4118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3146,"bootTime":1722443436,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:23:02.023102    4118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:23:02.026526    4118 out.go:177] * [force-systemd-flag-507000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:23:02.033321    4118 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:23:02.033415    4118 notify.go:220] Checking for updates...
	I0731 10:23:02.040421    4118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:23:02.041870    4118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:23:02.045460    4118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:23:02.048426    4118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:23:02.051445    4118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:23:02.054745    4118 config.go:182] Loaded profile config "NoKubernetes-330000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0731 10:23:02.054816    4118 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:02.054870    4118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:23:02.059460    4118 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:23:02.066468    4118 start.go:297] selected driver: qemu2
	I0731 10:23:02.066474    4118 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:23:02.066480    4118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:23:02.068754    4118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:23:02.071439    4118 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:23:02.074629    4118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 10:23:02.074658    4118 cni.go:84] Creating CNI manager for ""
	I0731 10:23:02.074666    4118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:23:02.074670    4118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:23:02.074716    4118 start.go:340] cluster config:
	{Name:force-systemd-flag-507000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:23:02.078422    4118 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:23:02.086441    4118 out.go:177] * Starting "force-systemd-flag-507000" primary control-plane node in "force-systemd-flag-507000" cluster
	I0731 10:23:02.089337    4118 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:23:02.089358    4118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:23:02.089372    4118 cache.go:56] Caching tarball of preloaded images
	I0731 10:23:02.089432    4118 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:23:02.089439    4118 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:23:02.089504    4118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/force-systemd-flag-507000/config.json ...
	I0731 10:23:02.089515    4118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/force-systemd-flag-507000/config.json: {Name:mkaaf7684762ded7bdbb0cc01f7a69c684146102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:23:02.089734    4118 start.go:360] acquireMachinesLock for force-systemd-flag-507000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:02.089770    4118 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "force-systemd-flag-507000"
	I0731 10:23:02.089780    4118 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:02.089809    4118 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:02.097404    4118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:02.114809    4118 start.go:159] libmachine.API.Create for "force-systemd-flag-507000" (driver="qemu2")
	I0731 10:23:02.114833    4118 client.go:168] LocalClient.Create starting
	I0731 10:23:02.114892    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:02.114925    4118 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:02.114935    4118 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:02.114969    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:02.114998    4118 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:02.115012    4118 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:02.115365    4118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:02.264947    4118 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:02.340232    4118 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:02.340237    4118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:02.340421    4118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:02.349543    4118 main.go:141] libmachine: STDOUT: 
	I0731 10:23:02.349560    4118 main.go:141] libmachine: STDERR: 
	I0731 10:23:02.349606    4118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2 +20000M
	I0731 10:23:02.357357    4118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:02.357371    4118 main.go:141] libmachine: STDERR: 
	I0731 10:23:02.357391    4118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:02.357396    4118 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:02.357416    4118 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:02.357441    4118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:5e:ed:1b:ce:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:02.358954    4118 main.go:141] libmachine: STDOUT: 
	I0731 10:23:02.358970    4118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:02.359008    4118 client.go:171] duration metric: took 244.155542ms to LocalClient.Create
	I0731 10:23:04.361182    4118 start.go:128] duration metric: took 2.271375792s to createHost
	I0731 10:23:04.361288    4118 start.go:83] releasing machines lock for "force-systemd-flag-507000", held for 2.271544375s
	W0731 10:23:04.361392    4118 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:04.378741    4118 out.go:177] * Deleting "force-systemd-flag-507000" in qemu2 ...
	W0731 10:23:04.406189    4118 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:04.406222    4118 start.go:729] Will try again in 5 seconds ...
	I0731 10:23:09.408415    4118 start.go:360] acquireMachinesLock for force-systemd-flag-507000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:09.408852    4118 start.go:364] duration metric: took 343.334µs to acquireMachinesLock for "force-systemd-flag-507000"
	I0731 10:23:09.408922    4118 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:09.409228    4118 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:09.418960    4118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:09.469484    4118 start.go:159] libmachine.API.Create for "force-systemd-flag-507000" (driver="qemu2")
	I0731 10:23:09.469532    4118 client.go:168] LocalClient.Create starting
	I0731 10:23:09.469649    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:09.469715    4118 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:09.469730    4118 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:09.469786    4118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:09.469829    4118 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:09.469843    4118 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:09.470302    4118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:09.644778    4118 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:09.739591    4118 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:09.739597    4118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:09.739775    4118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:09.749164    4118 main.go:141] libmachine: STDOUT: 
	I0731 10:23:09.749181    4118 main.go:141] libmachine: STDERR: 
	I0731 10:23:09.749224    4118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2 +20000M
	I0731 10:23:09.757096    4118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:09.757113    4118 main.go:141] libmachine: STDERR: 
	I0731 10:23:09.757127    4118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:09.757139    4118 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:09.757151    4118 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:09.757179    4118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bd:c8:b4:69:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-flag-507000/disk.qcow2
	I0731 10:23:09.758834    4118 main.go:141] libmachine: STDOUT: 
	I0731 10:23:09.758849    4118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:09.758861    4118 client.go:171] duration metric: took 289.327667ms to LocalClient.Create
	I0731 10:23:11.761048    4118 start.go:128] duration metric: took 2.351820334s to createHost
	I0731 10:23:11.761144    4118 start.go:83] releasing machines lock for "force-systemd-flag-507000", held for 2.352306292s
	W0731 10:23:11.761541    4118 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:11.774196    4118 out.go:177] 
	W0731 10:23:11.782204    4118 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:23:11.782237    4118 out.go:239] * 
	* 
	W0731 10:23:11.784611    4118 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:23:11.792193    4118 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-507000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-507000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-507000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.842708ms)

-- stdout --
	* The control-plane node force-systemd-flag-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-507000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-507000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 10:23:11.887928 -0700 PDT m=+2646.865684751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-507000 -n force-systemd-flag-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-507000 -n force-systemd-flag-507000: exit status 7 (34.26875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-507000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-507000
--- FAIL: TestForceSystemdFlag (10.02s)
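
Both createHost attempts above fail at the same point: the VM is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage on the CI host, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation, might look like:

	# Is the daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, (re)start it via Homebrew services:
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet

The same "Connection refused" signature recurs in the other qemu2 start failures in this report, so a single stopped daemon on the host would account for the whole batch.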

TestForceSystemdEnv (10.14s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-654000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-654000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.905102958s)

-- stdout --
	* [force-systemd-env-654000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-654000" primary control-plane node in "force-systemd-env-654000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-654000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:23:17.021816    4187 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:23:17.021962    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:17.021966    4187 out.go:304] Setting ErrFile to fd 2...
	I0731 10:23:17.021968    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:17.022095    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:23:17.023302    4187 out.go:298] Setting JSON to false
	I0731 10:23:17.040928    4187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3161,"bootTime":1722443436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:23:17.041007    4187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:23:17.045846    4187 out.go:177] * [force-systemd-env-654000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:23:17.053903    4187 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:23:17.054030    4187 notify.go:220] Checking for updates...
	I0731 10:23:17.060828    4187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:23:17.063905    4187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:23:17.066850    4187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:23:17.069800    4187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:23:17.072854    4187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 10:23:17.076168    4187 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:17.076222    4187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:23:17.080873    4187 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:23:17.087805    4187 start.go:297] selected driver: qemu2
	I0731 10:23:17.087810    4187 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:23:17.087815    4187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:23:17.090003    4187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:23:17.092828    4187 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:23:17.095932    4187 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 10:23:17.095946    4187 cni.go:84] Creating CNI manager for ""
	I0731 10:23:17.095952    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:23:17.095956    4187 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:23:17.095995    4187 start.go:340] cluster config:
	{Name:force-systemd-env-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:23:17.099650    4187 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:23:17.106815    4187 out.go:177] * Starting "force-systemd-env-654000" primary control-plane node in "force-systemd-env-654000" cluster
	I0731 10:23:17.110653    4187 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:23:17.110668    4187 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:23:17.110676    4187 cache.go:56] Caching tarball of preloaded images
	I0731 10:23:17.110734    4187 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:23:17.110740    4187 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:23:17.110790    4187 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/force-systemd-env-654000/config.json ...
	I0731 10:23:17.110804    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/force-systemd-env-654000/config.json: {Name:mk15c19296461ac4d7e81a9a9fd843493ec020a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:23:17.111010    4187 start.go:360] acquireMachinesLock for force-systemd-env-654000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:17.111041    4187 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "force-systemd-env-654000"
	I0731 10:23:17.111051    4187 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:17.111072    4187 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:17.118843    4187 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:17.134350    4187 start.go:159] libmachine.API.Create for "force-systemd-env-654000" (driver="qemu2")
	I0731 10:23:17.134387    4187 client.go:168] LocalClient.Create starting
	I0731 10:23:17.134454    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:17.134483    4187 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:17.134499    4187 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:17.134539    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:17.134560    4187 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:17.134568    4187 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:17.134912    4187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:17.326002    4187 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:17.465791    4187 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:17.465803    4187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:17.466021    4187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:17.476390    4187 main.go:141] libmachine: STDOUT: 
	I0731 10:23:17.476420    4187 main.go:141] libmachine: STDERR: 
	I0731 10:23:17.476505    4187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2 +20000M
	I0731 10:23:17.485778    4187 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:17.485797    4187 main.go:141] libmachine: STDERR: 
	I0731 10:23:17.485840    4187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:17.485846    4187 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:17.485860    4187 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:17.485886    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:1f:14:f3:0a:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:17.488172    4187 main.go:141] libmachine: STDOUT: 
	I0731 10:23:17.488197    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:17.488218    4187 client.go:171] duration metric: took 353.831083ms to LocalClient.Create
	I0731 10:23:19.490408    4187 start.go:128] duration metric: took 2.379341084s to createHost
	I0731 10:23:19.490476    4187 start.go:83] releasing machines lock for "force-systemd-env-654000", held for 2.379464625s
	W0731 10:23:19.490573    4187 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:19.501716    4187 out.go:177] * Deleting "force-systemd-env-654000" in qemu2 ...
	W0731 10:23:19.529944    4187 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:19.529973    4187 start.go:729] Will try again in 5 seconds ...
	I0731 10:23:24.531265    4187 start.go:360] acquireMachinesLock for force-systemd-env-654000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:24.531796    4187 start.go:364] duration metric: took 425.042µs to acquireMachinesLock for "force-systemd-env-654000"
	I0731 10:23:24.531951    4187 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-654000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:24.532179    4187 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:24.540820    4187 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 10:23:24.592892    4187 start.go:159] libmachine.API.Create for "force-systemd-env-654000" (driver="qemu2")
	I0731 10:23:24.592954    4187 client.go:168] LocalClient.Create starting
	I0731 10:23:24.593071    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:24.593129    4187 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:24.593146    4187 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:24.593200    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:24.593263    4187 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:24.593274    4187 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:24.593776    4187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:24.757008    4187 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:24.836850    4187 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:24.836857    4187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:24.837028    4187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:24.846560    4187 main.go:141] libmachine: STDOUT: 
	I0731 10:23:24.846580    4187 main.go:141] libmachine: STDERR: 
	I0731 10:23:24.846640    4187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2 +20000M
	I0731 10:23:24.854641    4187 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:24.854654    4187 main.go:141] libmachine: STDERR: 
	I0731 10:23:24.854673    4187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:24.854678    4187 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:24.854689    4187 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:24.854711    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:da:bb:2f:54:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/force-systemd-env-654000/disk.qcow2
	I0731 10:23:24.856361    4187 main.go:141] libmachine: STDOUT: 
	I0731 10:23:24.856380    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:24.856395    4187 client.go:171] duration metric: took 263.440667ms to LocalClient.Create
	I0731 10:23:26.858593    4187 start.go:128] duration metric: took 2.326417208s to createHost
	I0731 10:23:26.858637    4187 start.go:83] releasing machines lock for "force-systemd-env-654000", held for 2.326847416s
	W0731 10:23:26.859008    4187 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:26.867598    4187 out.go:177] 
	W0731 10:23:26.871681    4187 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:23:26.871734    4187 out.go:239] * 
	* 
	W0731 10:23:26.874631    4187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:23:26.886587    4187 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-654000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-654000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-654000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (64.898458ms)

-- stdout --
	* The control-plane node force-systemd-env-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-654000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-654000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 10:23:26.964061 -0700 PDT m=+2661.942060585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-654000 -n force-systemd-env-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-654000 -n force-systemd-env-654000: exit status 7 (32.086084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-654000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-654000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-654000
--- FAIL: TestForceSystemdEnv (10.14s)
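
For context, the assertion this test never reached is the docker_test.go:110 check shown above: with MINIKUBE_FORCE_SYSTEMD=true, Docker inside the guest is expected to report "systemd" as its cgroup driver rather than "cgroupfs". Against a profile that did start, the same check can be run by hand (this simply reuses the command the test invokes):

	# Expected to print "systemd" on a passing run
	out/minikube-darwin-arm64 -p force-systemd-env-654000 ssh "docker info --format {{.CgroupDriver}}"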

TestFunctional/parallel/ServiceCmdConnect (33.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-966000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-966000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-fz5nc" [69532e43-ffa9-4997-8ab7-7b8dfafae59c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-fz5nc" [69532e43-ffa9-4997-8ab7-7b8dfafae59c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0040205s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30206
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30206: Get "http://192.168.105.4:30206": dial tcp 192.168.105.4:30206: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-966000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-fz5nc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-966000/192.168.105.4
Start Time:       Wed, 31 Jul 2024 09:48:25 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://99a91f439eada68df6e1dbf911b95cb6d3485d929827dfd9cc1f4b60f0c2f633
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 31 Jul 2024 09:48:44 -0700
      Finished:     Wed, 31 Jul 2024 09:48:44 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nf9x (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-2nf9x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  32s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-fz5nc to functional-966000
  Normal   Pulling    32s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     29s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.22s (3.22s including waiting). Image size: 84957542 bytes.
  Normal   Created    14s (x3 over 29s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 29s)  kubelet            Started container echoserver-arm
  Normal   Pulled     14s (x2 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    1s (x4 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-fz5nc_default(69532e43-ffa9-4997-8ab7-7b8dfafae59c)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-966000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
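
That single log line is the root cause of the CrashLoopBackOff shown in the pod description above: "exec format error" means the kernel refused to execute /usr/sbin/nginx because the binary's architecture does not match the arm64 node. One way to confirm what architecture an image was built for (a sketch, run wherever the image has been pulled):

	# Prints the architecture recorded in the image metadata, e.g. "amd64" or "arm64"
	docker image inspect --format '{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8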
functional_test.go:1610: (dbg) Run:  kubectl --context functional-966000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.6.90
IPs:                      10.101.6.90
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30206/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
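
Note the empty Endpoints: field in the service description: the only pod matching the app=hello-node-connect selector never becomes Ready, so the NodePort has no backend to forward to, which is exactly why every fetch of http://192.168.105.4:30206 above ended in "connection refused". This can be confirmed directly (sketch, reusing the test's kubectl context):

	# An empty ENDPOINTS column confirms the service has no ready backends
	kubectl --context functional-966000 get endpoints hello-node-connect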
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-966000 -n functional-966000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh -- ls                                                                                         | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh cat                                                                                           | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | /mount-9p/test-1722444526450555000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh stat                                                                                          | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh stat                                                                                          | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh sudo                                                                                          | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port198263675/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh -- ls                                                                                         | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh sudo                                                                                          | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-966000 ssh findmnt                                                                                       | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT | 31 Jul 24 09:48 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-966000                                                                                                | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-966000 --dry-run                                                                                      | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-966000 | jenkins | v1.33.1 | 31 Jul 24 09:48 PDT |                     |
	|           | -p functional-966000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 09:48:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 09:48:53.637826    2209 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:48:53.637947    2209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.637951    2209 out.go:304] Setting ErrFile to fd 2...
	I0731 09:48:53.637953    2209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.638110    2209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:48:53.639098    2209 out.go:298] Setting JSON to false
	I0731 09:48:53.655527    2209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1097,"bootTime":1722443436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:48:53.655597    2209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:48:53.660293    2209 out.go:177] * [functional-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:48:53.668238    2209 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 09:48:53.668326    2209 notify.go:220] Checking for updates...
	I0731 09:48:53.675195    2209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:48:53.678267    2209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:48:53.681257    2209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:48:53.684221    2209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 09:48:53.687278    2209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 09:48:53.690425    2209 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 09:48:53.690699    2209 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:48:53.694183    2209 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 09:48:53.701198    2209 start.go:297] selected driver: qemu2
	I0731 09:48:53.701203    2209 start.go:901] validating driver "qemu2" against &{Name:functional-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:48:53.701248    2209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 09:48:53.703363    2209 cni.go:84] Creating CNI manager for ""
	I0731 09:48:53.703503    2209 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 09:48:53.703558    2209 start.go:340] cluster config:
	{Name:functional-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:48:53.714270    2209 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Jul 31 16:48:48 functional-966000 dockerd[6047]: time="2024-07-31T16:48:48.767212964Z" level=warning msg="cleaning up after shim disconnected" id=4e5efd886ecef1de27f2d119912b02a2a883f69a84d4865e0c813581f595a19a namespace=moby
	Jul 31 16:48:48 functional-966000 dockerd[6047]: time="2024-07-31T16:48:48.767217379Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 16:48:50 functional-966000 dockerd[6041]: time="2024-07-31T16:48:50.246176388Z" level=info msg="ignoring event" container=71d3c2f087e1604bd224dd661ca85507c0f3d193845b225b60bc6a85fa67a286 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 16:48:50 functional-966000 dockerd[6047]: time="2024-07-31T16:48:50.246240112Z" level=info msg="shim disconnected" id=71d3c2f087e1604bd224dd661ca85507c0f3d193845b225b60bc6a85fa67a286 namespace=moby
	Jul 31 16:48:50 functional-966000 dockerd[6047]: time="2024-07-31T16:48:50.246268392Z" level=warning msg="cleaning up after shim disconnected" id=71d3c2f087e1604bd224dd661ca85507c0f3d193845b225b60bc6a85fa67a286 namespace=moby
	Jul 31 16:48:50 functional-966000 dockerd[6047]: time="2024-07-31T16:48:50.246272307Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.787950534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.787995890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.788006928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.788054325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:53 functional-966000 dockerd[6041]: time="2024-07-31T16:48:53.817917853Z" level=info msg="ignoring event" container=2f683363b7a46f2564756a2fa297a837ab6ec503dbc67220acabd3237993f2bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.819008285Z" level=info msg="shim disconnected" id=2f683363b7a46f2564756a2fa297a837ab6ec503dbc67220acabd3237993f2bc namespace=moby
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.819041438Z" level=warning msg="cleaning up after shim disconnected" id=2f683363b7a46f2564756a2fa297a837ab6ec503dbc67220acabd3237993f2bc namespace=moby
	Jul 31 16:48:53 functional-966000 dockerd[6047]: time="2024-07-31T16:48:53.819045603Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.557118749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.557157275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.557165314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.557194677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.569867941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.569947200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.569961153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:54 functional-966000 dockerd[6047]: time="2024-07-31T16:48:54.570098347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 16:48:54 functional-966000 cri-dockerd[6295]: time="2024-07-31T16:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/da7b73e2f4762b3c5351da281190649ed913234375689834a053fe1ce0aacee2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 16:48:54 functional-966000 cri-dockerd[6295]: time="2024-07-31T16:48:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/92060cd8ce00c80ad575458189d35d069fcd8b2668572e79791cc1b0a399e2f8/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 16:48:54 functional-966000 dockerd[6041]: time="2024-07-31T16:48:54.851203580Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2f683363b7a46       72565bf5bbedf                                                                                         5 seconds ago        Exited              echoserver-arm            2                   a91acba890e0d       hello-node-65f5d5cc78-ncd66
	4e5efd886ecef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 seconds ago       Exited              mount-munger              0                   71d3c2f087e16       busybox-mount
	99a91f439eada       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   8d23fa12d7194       hello-node-connect-6f49f58cd5-fz5nc
	83b9d9d7d1043       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         25 seconds ago       Running             myfrontend                0                   1164f759e8118       sp-pod
	0f20db7c33920       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         40 seconds ago       Running             nginx                     0                   4f57de4a7d9c4       nginx-svc
	5c96a697cf5c9       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   08410817729a3       coredns-7db6d8ff4d-7w4ph
	d8ad3000bc9e5       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   e8b6d42b7f942       storage-provisioner
	2ab43319adc99       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   1926458bc4bf3       kube-proxy-sm5cf
	f41792da8965f       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   f24d79f9da734       kube-scheduler-functional-966000
	dfebe1e8912b4       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   9663f796b0f86       kube-controller-manager-functional-966000
	3ae90f2f979d5       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   670afc2f4a590       etcd-functional-966000
	885939ace595b       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   003566bf9c6a1       kube-apiserver-functional-966000
	7e6dcc92a4adc       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   b5854ae3c38b6       coredns-7db6d8ff4d-7w4ph
	673eda7f9a099       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   f62f0f64c3582       storage-provisioner
	97c72d7584107       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   a1c7bbf74f43c       kube-proxy-sm5cf
	dd26855cc1e81       014faa467e297                                                                                         About a minute ago   Exited              etcd                      1                   8a50519a212f8       etcd-functional-966000
	fa0567061caf6       8e97cdb19e7cc                                                                                         About a minute ago   Exited              kube-controller-manager   1                   2f6627b3056da       kube-controller-manager-functional-966000
	18b42b4ffcd69       d48f992a22722                                                                                         About a minute ago   Exited              kube-scheduler            1                   390604cc12df5       kube-scheduler-functional-966000
	
	
	==> coredns [5c96a697cf5c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44497 - 39293 "HINFO IN 2675860079863009578.7827548809261945128. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024880621s
	[INFO] 10.244.0.1:35371 - 55480 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000096126s
	[INFO] 10.244.0.1:6351 - 41723 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000094127s
	[INFO] 10.244.0.1:51176 - 3617 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000046522s
	[INFO] 10.244.0.1:13686 - 34696 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000956138s
	[INFO] 10.244.0.1:7800 - 26524 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000058433s
	[INFO] 10.244.0.1:62987 - 7143 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000034735s
	
	
	==> coredns [7e6dcc92a4ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34839 - 58298 "HINFO IN 680455955652567842.2217753438047921381. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.00504803s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-966000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-966000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=functional-966000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T09_46_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 16:46:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-966000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 16:48:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 16:48:53 +0000   Wed, 31 Jul 2024 16:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 16:48:53 +0000   Wed, 31 Jul 2024 16:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 16:48:53 +0000   Wed, 31 Jul 2024 16:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 16:48:53 +0000   Wed, 31 Jul 2024 16:46:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-966000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 859f6b2245624a4ebb34c531838a2d5e
	  System UUID:                859f6b2245624a4ebb34c531838a2d5e
	  Boot ID:                    870b8d1e-7950-4c85-991e-c249d7ca0714
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-ncd66                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     hello-node-connect-6f49f58cd5-fz5nc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 coredns-7db6d8ff4d-7w4ph                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m12s
	  kube-system                 etcd-functional-966000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-functional-966000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-functional-966000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-sm5cf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-functional-966000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-9vjpm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-sl9g4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  Starting                 65s                    kube-proxy       
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node functional-966000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node functional-966000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node functional-966000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m26s                  kubelet          Node functional-966000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m26s                  kubelet          Node functional-966000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s                  kubelet          Node functional-966000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m22s                  kubelet          Node functional-966000 status is now: NodeReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node functional-966000 event: Registered Node functional-966000 in Controller
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)    kubelet          Node functional-966000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)    kubelet          Node functional-966000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 114s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)    kubelet          Node functional-966000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                    node-controller  Node functional-966000 event: Registered Node functional-966000 in Controller
	  Normal  Starting                 70s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node functional-966000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node functional-966000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 70s)      kubelet          Node functional-966000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           54s                    node-controller  Node functional-966000 event: Registered Node functional-966000 in Controller
	
	
	==> dmesg <==
	[  +3.192355] systemd-fstab-generator[5126]: Ignoring "noauto" option for root device
	[ +10.834650] systemd-fstab-generator[5573]: Ignoring "noauto" option for root device
	[  +0.055341] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.098064] systemd-fstab-generator[5607]: Ignoring "noauto" option for root device
	[  +0.110637] systemd-fstab-generator[5619]: Ignoring "noauto" option for root device
	[  +0.110954] systemd-fstab-generator[5633]: Ignoring "noauto" option for root device
	[  +5.138157] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.377935] systemd-fstab-generator[6248]: Ignoring "noauto" option for root device
	[  +0.083361] systemd-fstab-generator[6260]: Ignoring "noauto" option for root device
	[  +0.086517] systemd-fstab-generator[6272]: Ignoring "noauto" option for root device
	[  +0.097901] systemd-fstab-generator[6287]: Ignoring "noauto" option for root device
	[  +0.219104] systemd-fstab-generator[6454]: Ignoring "noauto" option for root device
	[  +1.210912] systemd-fstab-generator[6580]: Ignoring "noauto" option for root device
	[  +1.015968] kauditd_printk_skb: 179 callbacks suppressed
	[Jul31 16:48] kauditd_printk_skb: 51 callbacks suppressed
	[  +1.093644] systemd-fstab-generator[7579]: Ignoring "noauto" option for root device
	[  +4.729737] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.286035] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.282807] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.165828] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.663762] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.561877] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.503893] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.399634] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.083274] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [3ae90f2f979d] <==
	{"level":"info","ts":"2024-07-31T16:47:49.863874Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T16:47:49.863936Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T16:47:49.864509Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T16:47:49.86462Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T16:47:49.864651Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T16:47:49.864749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-31T16:47:49.864803Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-31T16:47:49.864859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T16:47:49.86489Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T16:47:49.865754Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T16:47:49.86609Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T16:47:51.037504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:51.037673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:51.037716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:51.03775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T16:47:51.037771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T16:47:51.037816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-31T16:47:51.037842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T16:47:51.043313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T16:47:51.043727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T16:47:51.044002Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T16:47:51.044029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T16:47:51.043304Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-966000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T16:47:51.04753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T16:47:51.047933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [dd26855cc1e8] <==
	{"level":"info","ts":"2024-07-31T16:47:05.51155Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T16:47:06.490368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T16:47:06.490411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T16:47:06.490429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-31T16:47:06.490447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:06.490457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:06.490465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:06.490476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T16:47:06.492131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T16:47:06.492271Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T16:47:06.492447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T16:47:06.492458Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T16:47:06.493459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T16:47:06.492128Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-966000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T16:47:06.498597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T16:47:34.519468Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T16:47:34.519491Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-966000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-31T16:47:34.519528Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T16:47:34.519564Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T16:47:34.525588Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T16:47:34.525605Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T16:47:34.525635Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-31T16:47:34.527049Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T16:47:34.527088Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T16:47:34.527092Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-966000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 16:48:58 up 2 min,  0 users,  load average: 1.10, 1.08, 0.46
	Linux functional-966000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [885939ace595] <==
	I0731 16:47:51.676175       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 16:47:51.676183       1 aggregator.go:165] initial CRD sync complete...
	I0731 16:47:51.676186       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 16:47:51.676188       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 16:47:51.676190       1 cache.go:39] Caches are synced for autoregister controller
	I0731 16:47:51.686327       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 16:47:51.686333       1 policy_source.go:224] refreshing policies
	I0731 16:47:51.687417       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 16:47:51.701334       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 16:47:52.569795       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 16:47:52.847466       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 16:47:52.861906       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 16:47:52.875286       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 16:47:52.882501       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 16:47:52.884868       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 16:48:04.506431       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 16:48:04.516642       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 16:48:10.328071       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.44.112"}
	I0731 16:48:15.280062       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.247.248"}
	I0731 16:48:25.705191       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 16:48:25.747449       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.6.90"}
	I0731 16:48:39.276192       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.226.114"}
	I0731 16:48:54.165165       1 controller.go:615] quota admission added evaluator for: namespaces
	I0731 16:48:54.249589       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.228.224"}
	I0731 16:48:54.257498       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.6.10"}
	
	
	==> kube-controller-manager [dfebe1e8912b] <==
	I0731 16:48:39.252588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="15.41µs"
	I0731 16:48:40.115434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="31.904µs"
	I0731 16:48:41.123904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.532µs"
	I0731 16:48:45.146984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="32.612µs"
	I0731 16:48:53.754607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="26.698µs"
	I0731 16:48:54.196244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.37323ms"
	E0731 16:48:54.196262       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 16:48:54.207013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="18.852043ms"
	E0731 16:48:54.207096       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 16:48:54.208273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.995831ms"
	E0731 16:48:54.208281       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 16:48:54.213404       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="32.403µs"
	I0731 16:48:54.215054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.764041ms"
	E0731 16:48:54.215114       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 16:48:54.215612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.491465ms"
	E0731 16:48:54.215693       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 16:48:54.221326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.955152ms"
	I0731 16:48:54.231928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.717538ms"
	I0731 16:48:54.232355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.008899ms"
	I0731 16:48:54.232493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="125.657µs"
	I0731 16:48:54.237686       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="13.286µs"
	I0731 16:48:54.238650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="6.699359ms"
	I0731 16:48:54.238727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="14.077µs"
	I0731 16:48:54.244001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="22.783µs"
	I0731 16:48:57.749809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.158µs"
	
	
	==> kube-controller-manager [fa0567061caf] <==
	I0731 16:47:19.929190       1 shared_informer.go:320] Caches are synced for GC
	I0731 16:47:19.944748       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 16:47:19.963921       1 shared_informer.go:320] Caches are synced for deployment
	I0731 16:47:19.964184       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 16:47:20.014663       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 16:47:20.014678       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 16:47:20.014687       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 16:47:20.014851       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.943µs"
	I0731 16:47:20.016832       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 16:47:20.017906       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 16:47:20.018065       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 16:47:20.019820       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 16:47:20.019831       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 16:47:20.020440       1 shared_informer.go:320] Caches are synced for job
	I0731 16:47:20.023146       1 shared_informer.go:320] Caches are synced for disruption
	I0731 16:47:20.024228       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 16:47:20.024306       1 shared_informer.go:320] Caches are synced for taint
	I0731 16:47:20.024365       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 16:47:20.024415       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-966000"
	I0731 16:47:20.026377       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 16:47:20.027630       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 16:47:20.028197       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 16:47:20.434751       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 16:47:20.521036       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 16:47:20.521081       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2ab43319adc9] <==
	I0731 16:47:53.281544       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:47:53.344438       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 16:47:53.352913       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:47:53.352935       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:47:53.352943       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:47:53.353670       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:47:53.353773       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:47:53.353781       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:47:53.354222       1 config.go:192] "Starting service config controller"
	I0731 16:47:53.354234       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:47:53.354244       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:47:53.354269       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:47:53.354475       1 config.go:319] "Starting node config controller"
	I0731 16:47:53.354500       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:47:53.454461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 16:47:53.454527       1 shared_informer.go:320] Caches are synced for node config
	I0731 16:47:53.454490       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [97c72d758410] <==
	I0731 16:47:08.309722       1 server_linux.go:69] "Using iptables proxy"
	I0731 16:47:08.420554       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 16:47:08.436467       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 16:47:08.436485       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 16:47:08.436495       1 server_linux.go:165] "Using iptables Proxier"
	I0731 16:47:08.437185       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 16:47:08.437248       1 server.go:872] "Version info" version="v1.30.3"
	I0731 16:47:08.437255       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:47:08.438727       1 config.go:192] "Starting service config controller"
	I0731 16:47:08.438760       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 16:47:08.438808       1 config.go:101] "Starting endpoint slice config controller"
	I0731 16:47:08.438824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 16:47:08.439205       1 config.go:319] "Starting node config controller"
	I0731 16:47:08.439215       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 16:47:08.539771       1 shared_informer.go:320] Caches are synced for node config
	I0731 16:47:08.539771       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 16:47:08.539785       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [18b42b4ffcd6] <==
	I0731 16:47:06.109899       1 serving.go:380] Generated self-signed cert in-memory
	W0731 16:47:07.006134       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 16:47:07.006150       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 16:47:07.006155       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 16:47:07.006157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 16:47:07.023327       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 16:47:07.023406       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:47:07.024148       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 16:47:07.024220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 16:47:07.024262       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 16:47:07.024289       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 16:47:07.124865       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 16:47:34.544271       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0731 16:47:34.544300       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0731 16:47:34.544358       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f41792da8965] <==
	I0731 16:47:50.128813       1 serving.go:380] Generated self-signed cert in-memory
	W0731 16:47:51.592347       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 16:47:51.592361       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 16:47:51.592365       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 16:47:51.592368       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 16:47:51.624132       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 16:47:51.624145       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 16:47:51.624812       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 16:47:51.624831       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 16:47:51.624935       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 16:47:51.625001       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 16:47:51.725777       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 16:48:48 functional-966000 kubelet[6587]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 16:48:48 functional-966000 kubelet[6587]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 16:48:48 functional-966000 kubelet[6587]: I0731 16:48:48.818811    6587 scope.go:117] "RemoveContainer" containerID="be847e135f5a0a2307beef48422d457528bf90158dd3de8908bd231f640193b2"
	Jul 31 16:48:48 functional-966000 kubelet[6587]: I0731 16:48:48.824683    6587 scope.go:117] "RemoveContainer" containerID="0a67b250d0a19c351a3c65043341e52bf416d011df1b984ee90f3584670156fa"
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.371466    6587 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/6ee051a6-a7b9-4016-8a3a-89808d8224cd-test-volume\") pod \"6ee051a6-a7b9-4016-8a3a-89808d8224cd\" (UID: \"6ee051a6-a7b9-4016-8a3a-89808d8224cd\") "
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.371491    6587 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6zd\" (UniqueName: \"kubernetes.io/projected/6ee051a6-a7b9-4016-8a3a-89808d8224cd-kube-api-access-lt6zd\") pod \"6ee051a6-a7b9-4016-8a3a-89808d8224cd\" (UID: \"6ee051a6-a7b9-4016-8a3a-89808d8224cd\") "
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.371664    6587 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee051a6-a7b9-4016-8a3a-89808d8224cd-test-volume" (OuterVolumeSpecName: "test-volume") pod "6ee051a6-a7b9-4016-8a3a-89808d8224cd" (UID: "6ee051a6-a7b9-4016-8a3a-89808d8224cd"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.374140    6587 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee051a6-a7b9-4016-8a3a-89808d8224cd-kube-api-access-lt6zd" (OuterVolumeSpecName: "kube-api-access-lt6zd") pod "6ee051a6-a7b9-4016-8a3a-89808d8224cd" (UID: "6ee051a6-a7b9-4016-8a3a-89808d8224cd"). InnerVolumeSpecName "kube-api-access-lt6zd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.472270    6587 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/6ee051a6-a7b9-4016-8a3a-89808d8224cd-test-volume\") on node \"functional-966000\" DevicePath \"\""
	Jul 31 16:48:50 functional-966000 kubelet[6587]: I0731 16:48:50.472284    6587 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lt6zd\" (UniqueName: \"kubernetes.io/projected/6ee051a6-a7b9-4016-8a3a-89808d8224cd-kube-api-access-lt6zd\") on node \"functional-966000\" DevicePath \"\""
	Jul 31 16:48:51 functional-966000 kubelet[6587]: I0731 16:48:51.181506    6587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d3c2f087e1604bd224dd661ca85507c0f3d193845b225b60bc6a85fa67a286"
	Jul 31 16:48:53 functional-966000 kubelet[6587]: I0731 16:48:53.745279    6587 scope.go:117] "RemoveContainer" containerID="ac8404269d13d301dcb404608c0a65cc6c5a7a414ab04cc1548812890b57950e"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.198335    6587 scope.go:117] "RemoveContainer" containerID="ac8404269d13d301dcb404608c0a65cc6c5a7a414ab04cc1548812890b57950e"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.198494    6587 scope.go:117] "RemoveContainer" containerID="2f683363b7a46f2564756a2fa297a837ab6ec503dbc67220acabd3237993f2bc"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: E0731 16:48:54.198573    6587 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-ncd66_default(7b7beae5-78d8-4b01-98d3-5feec89aaea9)\"" pod="default/hello-node-65f5d5cc78-ncd66" podUID="7b7beae5-78d8-4b01-98d3-5feec89aaea9"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.222390    6587 topology_manager.go:215] "Topology Admit Handler" podUID="fbe7a3fa-73a0-4aef-aaca-044cfc728ff9" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-sl9g4"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: E0731 16:48:54.222437    6587 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee051a6-a7b9-4016-8a3a-89808d8224cd" containerName="mount-munger"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.222455    6587 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee051a6-a7b9-4016-8a3a-89808d8224cd" containerName="mount-munger"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.235479    6587 topology_manager.go:215] "Topology Admit Handler" podUID="dec5af6a-3eff-4f9f-8c61-404f4d03a512" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-9vjpm"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.297053    6587 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjjf\" (UniqueName: \"kubernetes.io/projected/dec5af6a-3eff-4f9f-8c61-404f4d03a512-kube-api-access-jpjjf\") pod \"dashboard-metrics-scraper-b5fc48f67-9vjpm\" (UID: \"dec5af6a-3eff-4f9f-8c61-404f4d03a512\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9vjpm"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.297080    6587 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkv6\" (UniqueName: \"kubernetes.io/projected/fbe7a3fa-73a0-4aef-aaca-044cfc728ff9-kube-api-access-8wkv6\") pod \"kubernetes-dashboard-779776cb65-sl9g4\" (UID: \"fbe7a3fa-73a0-4aef-aaca-044cfc728ff9\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-sl9g4"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.297091    6587 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fbe7a3fa-73a0-4aef-aaca-044cfc728ff9-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-sl9g4\" (UID: \"fbe7a3fa-73a0-4aef-aaca-044cfc728ff9\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-sl9g4"
	Jul 31 16:48:54 functional-966000 kubelet[6587]: I0731 16:48:54.297098    6587 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dec5af6a-3eff-4f9f-8c61-404f4d03a512-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-9vjpm\" (UID: \"dec5af6a-3eff-4f9f-8c61-404f4d03a512\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-9vjpm"
	Jul 31 16:48:57 functional-966000 kubelet[6587]: I0731 16:48:57.744619    6587 scope.go:117] "RemoveContainer" containerID="99a91f439eada68df6e1dbf911b95cb6d3485d929827dfd9cc1f4b60f0c2f633"
	Jul 31 16:48:57 functional-966000 kubelet[6587]: E0731 16:48:57.744714    6587 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-fz5nc_default(69532e43-ffa9-4997-8ab7-7b8dfafae59c)\"" pod="default/hello-node-connect-6f49f58cd5-fz5nc" podUID="69532e43-ffa9-4997-8ab7-7b8dfafae59c"
	
	
	==> storage-provisioner [673eda7f9a09] <==
	I0731 16:47:08.298307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 16:47:08.302687       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 16:47:08.302760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 16:47:25.688750       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 16:47:25.688828       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-966000_136d0870-5f04-4b0a-b0fe-d0fd86092f89!
	I0731 16:47:25.689226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9bf297a-7847-494c-b73e-b80fae1b4cf0", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-966000_136d0870-5f04-4b0a-b0fe-d0fd86092f89 became leader
	I0731 16:47:25.789856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-966000_136d0870-5f04-4b0a-b0fe-d0fd86092f89!
	
	
	==> storage-provisioner [d8ad3000bc9e] <==
	I0731 16:47:53.250196       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 16:47:53.254437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 16:47:53.254452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 16:48:10.644529       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 16:48:10.645186       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9bf297a-7847-494c-b73e-b80fae1b4cf0", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-966000_ef429424-3799-4cf5-b82e-edae48af0661 became leader
	I0731 16:48:10.645249       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-966000_ef429424-3799-4cf5-b82e-edae48af0661!
	I0731 16:48:10.746059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-966000_ef429424-3799-4cf5-b82e-edae48af0661!
	I0731 16:48:20.460576       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0731 16:48:20.461043       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"52fdfec2-f6db-41cd-9260-65a196846b28", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0731 16:48:20.461152       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ff3d57c7-54c1-4464-a49e-45c5b0fcb822 384 0 2024-07-31 16:46:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-31 16:46:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-52fdfec2-f6db-41cd-9260-65a196846b28 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  52fdfec2-f6db-41cd-9260-65a196846b28 676 0 2024-07-31 16:48:20 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-31 16:48:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-31 16:48:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0731 16:48:20.461909       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-52fdfec2-f6db-41cd-9260-65a196846b28" provisioned
	I0731 16:48:20.461941       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0731 16:48:20.461960       1 volume_store.go:212] Trying to save persistentvolume "pvc-52fdfec2-f6db-41cd-9260-65a196846b28"
	I0731 16:48:20.465832       1 volume_store.go:219] persistentvolume "pvc-52fdfec2-f6db-41cd-9260-65a196846b28" saved
	I0731 16:48:20.466110       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"52fdfec2-f6db-41cd-9260-65a196846b28", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-52fdfec2-f6db-41cd-9260-65a196846b28
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-966000 -n functional-966000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-966000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-9vjpm kubernetes-dashboard-779776cb65-sl9g4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-966000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9vjpm kubernetes-dashboard-779776cb65-sl9g4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-966000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9vjpm kubernetes-dashboard-779776cb65-sl9g4: exit status 1 (42.990667ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-966000/192.168.105.4
	Start Time:       Wed, 31 Jul 2024 09:48:47 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://4e5efd886ecef1de27f2d119912b02a2a883f69a84d4865e0c813581f595a19a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 31 Jul 2024 09:48:48 -0700
	      Finished:     Wed, 31 Jul 2024 09:48:48 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lt6zd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lt6zd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-966000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.057s (1.057s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-9vjpm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-sl9g4" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-966000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-9vjpm kubernetes-dashboard-779776cb65-sl9g4: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.60s)
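
Triage note: the kubelet log above shows the echoserver-arm containers for hello-node-65f5d5cc78-ncd66 and hello-node-connect-6f49f58cd5-fz5nc in CrashLoopBackOff, which is consistent with the service connect check never getting an answer. A hypothetical manual follow-up against the same profile (these commands are not part of the test run, and the pod name must match the live ReplicaSet) would be:

    kubectl --context functional-966000 get pods -o wide
    kubectl --context functional-966000 logs hello-node-connect-6f49f58cd5-fz5nc --previous

kubectl logs --previous prints the output of the last terminated container, which typically shows why the image exits on this architecture.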

TestMultiControlPlane/serial/StopSecondaryNode (205.09s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-217000 node stop m02 -v=7 --alsologtostderr: (3.164352042s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
E0731 09:54:36.246610    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:55:58.164227    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:57:03.531893    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr: exit status 7 (2m55.967410792s)

-- stdout --
	ha-217000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-217000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-217000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 09:54:07.659094    2639 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:54:07.659258    2639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:54:07.659266    2639 out.go:304] Setting ErrFile to fd 2...
	I0731 09:54:07.659269    2639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:54:07.659393    2639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:54:07.659517    2639 out.go:298] Setting JSON to false
	I0731 09:54:07.659534    2639 mustload.go:65] Loading cluster: ha-217000
	I0731 09:54:07.659569    2639 notify.go:220] Checking for updates...
	I0731 09:54:07.659745    2639 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 09:54:07.659752    2639 status.go:255] checking status of ha-217000 ...
	I0731 09:54:07.660475    2639 status.go:330] ha-217000 host status = "Running" (err=<nil>)
	I0731 09:54:07.660486    2639 host.go:66] Checking if "ha-217000" exists ...
	I0731 09:54:07.660590    2639 host.go:66] Checking if "ha-217000" exists ...
	I0731 09:54:07.660702    2639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:54:07.660714    2639 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/id_rsa Username:docker}
	W0731 09:54:33.585115    2639 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 09:54:33.585257    2639 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 09:54:33.585277    2639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 09:54:33.585286    2639 status.go:257] ha-217000 status: &{Name:ha-217000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 09:54:33.585319    2639 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 09:54:33.585328    2639 status.go:255] checking status of ha-217000-m02 ...
	I0731 09:54:33.585740    2639 status.go:330] ha-217000-m02 host status = "Stopped" (err=<nil>)
	I0731 09:54:33.585750    2639 status.go:343] host is not running, skipping remaining checks
	I0731 09:54:33.585756    2639 status.go:257] ha-217000-m02 status: &{Name:ha-217000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 09:54:33.585766    2639 status.go:255] checking status of ha-217000-m03 ...
	I0731 09:54:33.587051    2639 status.go:330] ha-217000-m03 host status = "Running" (err=<nil>)
	I0731 09:54:33.587063    2639 host.go:66] Checking if "ha-217000-m03" exists ...
	I0731 09:54:33.587271    2639 host.go:66] Checking if "ha-217000-m03" exists ...
	I0731 09:54:33.587514    2639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:54:33.587534    2639 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m03/id_rsa Username:docker}
	W0731 09:55:48.585818    2639 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 09:55:48.585862    2639 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 09:55:48.585869    2639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 09:55:48.585881    2639 status.go:257] ha-217000-m03 status: &{Name:ha-217000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 09:55:48.585889    2639 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 09:55:48.585894    2639 status.go:255] checking status of ha-217000-m04 ...
	I0731 09:55:48.586605    2639 status.go:330] ha-217000-m04 host status = "Running" (err=<nil>)
	I0731 09:55:48.586612    2639 host.go:66] Checking if "ha-217000-m04" exists ...
	I0731 09:55:48.586702    2639 host.go:66] Checking if "ha-217000-m04" exists ...
	I0731 09:55:48.586815    2639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:55:48.586821    2639 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m04/id_rsa Username:docker}
	W0731 09:57:03.587534    2639 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 09:57:03.587583    2639 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 09:57:03.587592    2639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 09:57:03.587595    2639 status.go:257] ha-217000-m04 status: &{Name:ha-217000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 09:57:03.587616    2639 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-217000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 3 (25.9626315s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 09:57:29.550443    2672 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 09:57:29.550452    2672 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (205.09s)
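
Triage note: each host check in the status output above fails the same way, with dial tcp <ip>:22 timing out for 192.168.105.5, 192.168.105.7 and 192.168.105.8, so every node except the deliberately stopped m02 is reported as Error. A hypothetical manual reachability probe under the same assumptions (macOS netcat flags; the SSH key path is copied from the log) would be:

    nc -z -G 5 192.168.105.5 22
    ssh -i /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/id_rsa docker@192.168.105.5 'df -h /var'

If the nc probe also times out, the problem sits in the socket_vmnet host networking rather than inside the guests.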

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (55.22s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (29.261915125s)
ha_test.go:413: expected profile "ha-217000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-217000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-217000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-217000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
E0731 09:58:14.298480    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 3 (25.95586325s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 09:58:24.767626    2686 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 09:58:24.767636    2686 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (55.22s)
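
Triage note: the assertion parses the JSON printed by profile list and compares the Status field of the ha-217000 entry, expecting Degraded after stopping one control-plane node but finding Stopped. A quick manual check of the same field, assuming jq is available (it is not used by the harness), would be:

    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'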

TestMultiControlPlane/serial/RestartSecondaryNode (159.63s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.079303667s)

-- stdout --
	* Starting "ha-217000-m02" control-plane node in "ha-217000" cluster
	* Restarting existing qemu2 VM for "ha-217000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-217000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 09:58:24.799111    2691 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:58:24.799348    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:58:24.799355    2691 out.go:304] Setting ErrFile to fd 2...
	I0731 09:58:24.799357    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:58:24.799485    2691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:58:24.799740    2691 mustload.go:65] Loading cluster: ha-217000
	I0731 09:58:24.799968    2691 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 09:58:24.800194    2691 host.go:58] "ha-217000-m02" host status: Stopped
	I0731 09:58:24.804776    2691 out.go:177] * Starting "ha-217000-m02" control-plane node in "ha-217000" cluster
	I0731 09:58:24.807781    2691 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 09:58:24.807796    2691 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 09:58:24.807804    2691 cache.go:56] Caching tarball of preloaded images
	I0731 09:58:24.807879    2691 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 09:58:24.807885    2691 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 09:58:24.807939    2691 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/ha-217000/config.json ...
	I0731 09:58:24.808247    2691 start.go:360] acquireMachinesLock for ha-217000-m02: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 09:58:24.808290    2691 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-217000-m02"
	I0731 09:58:24.808298    2691 start.go:96] Skipping create...Using existing machine configuration
	I0731 09:58:24.808304    2691 fix.go:54] fixHost starting: m02
	I0731 09:58:24.808409    2691 fix.go:112] recreateIfNeeded on ha-217000-m02: state=Stopped err=<nil>
	W0731 09:58:24.808414    2691 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 09:58:24.811668    2691 out.go:177] * Restarting existing qemu2 VM for "ha-217000-m02" ...
	I0731 09:58:24.815791    2691 qemu.go:418] Using hvf for hardware acceleration
	I0731 09:58:24.815834    2691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d3:56:26:37:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/disk.qcow2
	I0731 09:58:24.818092    2691 main.go:141] libmachine: STDOUT: 
	I0731 09:58:24.818109    2691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 09:58:24.818135    2691 fix.go:56] duration metric: took 9.830833ms for fixHost
	I0731 09:58:24.818139    2691 start.go:83] releasing machines lock for "ha-217000-m02", held for 9.84575ms
	W0731 09:58:24.818146    2691 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 09:58:24.818178    2691 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 09:58:24.818182    2691 start.go:729] Will try again in 5 seconds ...
	I0731 09:58:29.820211    2691 start.go:360] acquireMachinesLock for ha-217000-m02: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 09:58:29.820336    2691 start.go:364] duration metric: took 98.875µs to acquireMachinesLock for "ha-217000-m02"
	I0731 09:58:29.820387    2691 start.go:96] Skipping create...Using existing machine configuration
	I0731 09:58:29.820391    2691 fix.go:54] fixHost starting: m02
	I0731 09:58:29.820562    2691 fix.go:112] recreateIfNeeded on ha-217000-m02: state=Stopped err=<nil>
	W0731 09:58:29.820567    2691 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 09:58:29.824294    2691 out.go:177] * Restarting existing qemu2 VM for "ha-217000-m02" ...
	I0731 09:58:29.828257    2691 qemu.go:418] Using hvf for hardware acceleration
	I0731 09:58:29.828319    2691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d3:56:26:37:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/disk.qcow2
	I0731 09:58:29.830407    2691 main.go:141] libmachine: STDOUT: 
	I0731 09:58:29.830424    2691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 09:58:29.830449    2691 fix.go:56] duration metric: took 10.059208ms for fixHost
	I0731 09:58:29.830454    2691 start.go:83] releasing machines lock for "ha-217000-m02", held for 10.112833ms
	W0731 09:58:29.830498    2691 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 09:58:29.834213    2691 out.go:177] 
	W0731 09:58:29.838221    2691 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 09:58:29.838225    2691 out.go:239] * 
	* 
	W0731 09:58:29.839892    2691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 09:58:29.843241    2691 out.go:177] 

** /stderr **
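
Every failed start in this test reduces to the same root cause shown above: the qemu2 driver cannot reach the socket_vmnet control socket at /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU is ever launched. A minimal Go probe (an illustrative sketch, not minikube code; the socket path is taken from the log) reproduces the check:

    // probe.go: dial the socket_vmnet unix socket the way a client would.
    // A "connection refused" here corresponds to the driver failure above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet reachable")
    }
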
ha_test.go:422: I0731 09:58:24.799111    2691 out.go:291] Setting OutFile to fd 1 ...
I0731 09:58:24.799348    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:58:24.799355    2691 out.go:304] Setting ErrFile to fd 2...
I0731 09:58:24.799357    2691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:58:24.799485    2691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:58:24.799740    2691 mustload.go:65] Loading cluster: ha-217000
I0731 09:58:24.799968    2691 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0731 09:58:24.800194    2691 host.go:58] "ha-217000-m02" host status: Stopped
I0731 09:58:24.804776    2691 out.go:177] * Starting "ha-217000-m02" control-plane node in "ha-217000" cluster
I0731 09:58:24.807781    2691 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 09:58:24.807796    2691 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 09:58:24.807804    2691 cache.go:56] Caching tarball of preloaded images
I0731 09:58:24.807879    2691 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 09:58:24.807885    2691 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 09:58:24.807939    2691 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/ha-217000/config.json ...
I0731 09:58:24.808247    2691 start.go:360] acquireMachinesLock for ha-217000-m02: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 09:58:24.808290    2691 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-217000-m02"
I0731 09:58:24.808298    2691 start.go:96] Skipping create...Using existing machine configuration
I0731 09:58:24.808304    2691 fix.go:54] fixHost starting: m02
I0731 09:58:24.808409    2691 fix.go:112] recreateIfNeeded on ha-217000-m02: state=Stopped err=<nil>
W0731 09:58:24.808414    2691 fix.go:138] unexpected machine state, will restart: <nil>
I0731 09:58:24.811668    2691 out.go:177] * Restarting existing qemu2 VM for "ha-217000-m02" ...
I0731 09:58:24.815791    2691 qemu.go:418] Using hvf for hardware acceleration
I0731 09:58:24.815834    2691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d3:56:26:37:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/disk.qcow2
I0731 09:58:24.818092    2691 main.go:141] libmachine: STDOUT: 
I0731 09:58:24.818109    2691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 09:58:24.818135    2691 fix.go:56] duration metric: took 9.830833ms for fixHost
I0731 09:58:24.818139    2691 start.go:83] releasing machines lock for "ha-217000-m02", held for 9.84575ms
W0731 09:58:24.818146    2691 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 09:58:24.818178    2691 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 09:58:24.818182    2691 start.go:729] Will try again in 5 seconds ...
I0731 09:58:29.820211    2691 start.go:360] acquireMachinesLock for ha-217000-m02: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 09:58:29.820336    2691 start.go:364] duration metric: took 98.875µs to acquireMachinesLock for "ha-217000-m02"
I0731 09:58:29.820387    2691 start.go:96] Skipping create...Using existing machine configuration
I0731 09:58:29.820391    2691 fix.go:54] fixHost starting: m02
I0731 09:58:29.820562    2691 fix.go:112] recreateIfNeeded on ha-217000-m02: state=Stopped err=<nil>
W0731 09:58:29.820567    2691 fix.go:138] unexpected machine state, will restart: <nil>
I0731 09:58:29.824294    2691 out.go:177] * Restarting existing qemu2 VM for "ha-217000-m02" ...
I0731 09:58:29.828257    2691 qemu.go:418] Using hvf for hardware acceleration
I0731 09:58:29.828319    2691 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d3:56:26:37:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m02/disk.qcow2
I0731 09:58:29.830407    2691 main.go:141] libmachine: STDOUT: 
I0731 09:58:29.830424    2691 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 09:58:29.830449    2691 fix.go:56] duration metric: took 10.059208ms for fixHost
I0731 09:58:29.830454    2691 start.go:83] releasing machines lock for "ha-217000-m02", held for 10.112833ms
W0731 09:58:29.830498    2691 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 09:58:29.834213    2691 out.go:177] 
W0731 09:58:29.838221    2691 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 09:58:29.838225    2691 out.go:239] * 
* 
W0731 09:58:29.839892    2691 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 09:58:29.843241    2691 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-217000 node start m02 -v=7 --alsologtostderr": exit status 80
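
The trace above also shows the driver's recovery behavior: after the first "StartHost failed, but will try again" it sleeps a fixed 5 seconds (start.go:729), retries exactly once, and only then aborts with GUEST_NODE_PROVISION and exit status 80. A sketch of that single fixed-delay retry, with hypothetical names standing in for minikube's internals:

    // retry.go: one fixed-delay retry, as seen in the log. startHost is a
    // stand-in for the driver start that fails with "Connection refused".
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            err = startHost()
        }
        if err != nil {
            fmt.Println("X Exiting due to GUEST_NODE_PROVISION:", err)
        }
    }
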
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
E0731 09:58:42.002594    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr: exit status 7 (2m8.564570167s)

-- stdout --
	ha-217000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-217000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-217000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent

-- /stdout --
** stderr ** 
	I0731 09:58:29.879346    2695 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:58:29.879493    2695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:58:29.879499    2695 out.go:304] Setting ErrFile to fd 2...
	I0731 09:58:29.879502    2695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:58:29.879650    2695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:58:29.879773    2695 out.go:298] Setting JSON to false
	I0731 09:58:29.879785    2695 mustload.go:65] Loading cluster: ha-217000
	I0731 09:58:29.879866    2695 notify.go:220] Checking for updates...
	I0731 09:58:29.880028    2695 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 09:58:29.880036    2695 status.go:255] checking status of ha-217000 ...
	I0731 09:58:29.880724    2695 status.go:330] ha-217000 host status = "Running" (err=<nil>)
	I0731 09:58:29.880736    2695 host.go:66] Checking if "ha-217000" exists ...
	I0731 09:58:29.880826    2695 host.go:66] Checking if "ha-217000" exists ...
	I0731 09:58:29.880935    2695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:58:29.880946    2695 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/id_rsa Username:docker}
	W0731 09:58:29.881116    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 09:58:29.881134    2695 retry.go:31] will retry after 360.315215ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 09:58:30.243633    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 09:58:30.243658    2695 retry.go:31] will retry after 487.413777ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 09:58:30.733277    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 09:58:30.733300    2695 retry.go:31] will retry after 485.449139ms: dial tcp 192.168.105.5:22: connect: host is down
	W0731 09:58:31.220946    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0731 09:58:31.220998    2695 retry.go:31] will retry after 338.61584ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0731 09:58:31.561668    2695 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/id_rsa Username:docker}
	W0731 09:58:57.483615    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 09:58:57.483740    2695 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 09:58:57.483764    2695 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 09:58:57.483779    2695 status.go:257] ha-217000 status: &{Name:ha-217000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 09:58:57.483822    2695 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 09:58:57.483837    2695 status.go:255] checking status of ha-217000-m02 ...
	I0731 09:58:57.484614    2695 status.go:330] ha-217000-m02 host status = "Stopped" (err=<nil>)
	I0731 09:58:57.484626    2695 status.go:343] host is not running, skipping remaining checks
	I0731 09:58:57.484633    2695 status.go:257] ha-217000-m02 status: &{Name:ha-217000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 09:58:57.484644    2695 status.go:255] checking status of ha-217000-m03 ...
	I0731 09:58:57.486034    2695 status.go:330] ha-217000-m03 host status = "Running" (err=<nil>)
	I0731 09:58:57.486047    2695 host.go:66] Checking if "ha-217000-m03" exists ...
	I0731 09:58:57.486333    2695 host.go:66] Checking if "ha-217000-m03" exists ...
	I0731 09:58:57.486673    2695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:58:57.486695    2695 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m03/id_rsa Username:docker}
	W0731 09:59:23.404302    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 09:59:23.404356    2695 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 09:59:23.404364    2695 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 09:59:23.404368    2695 status.go:257] ha-217000-m03 status: &{Name:ha-217000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 09:59:23.404379    2695 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 09:59:23.404387    2695 status.go:255] checking status of ha-217000-m04 ...
	I0731 09:59:23.405115    2695 status.go:330] ha-217000-m04 host status = "Running" (err=<nil>)
	I0731 09:59:23.405123    2695 host.go:66] Checking if "ha-217000-m04" exists ...
	I0731 09:59:23.405231    2695 host.go:66] Checking if "ha-217000-m04" exists ...
	I0731 09:59:23.405376    2695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 09:59:23.405382    2695 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000-m04/id_rsa Username:docker}
	W0731 10:00:38.406218    2695 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 10:00:38.406376    2695 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 10:00:38.406411    2695 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 10:00:38.406428    2695 status.go:257] ha-217000-m04 status: &{Name:ha-217000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 10:00:38.406467    2695 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr" : exit status 7
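
Exit status 7 from "minikube status" is itself informative: per the command's own help text, the exit code encodes VM, cluster, and Kubernetes health as bits (1, 2, and 4 respectively), so 7 means all three checks failed, while the exit status 3 in the post-mortem below means the VM and cluster checks failed. A quick decoder:

    // decode.go: interpret `minikube status` exit codes as documented bit
    // flags (1 = VM not OK, 2 = cluster not OK, 4 = Kubernetes not OK).
    package main

    import "fmt"

    func main() {
        for _, code := range []int{3, 7} {
            fmt.Printf("exit %d: vm_nok=%v cluster_nok=%v k8s_nok=%v\n",
                code, code&1 != 0, code&2 != 0, code&4 != 0)
        }
    }
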
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 3 (25.9874045s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 10:01:04.397075    3015 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 10:01:04.397094    3015 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (159.63s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (185.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-217000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-217000 -v=7 --alsologtostderr
E0731 10:02:03.526425    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 10:03:14.293712    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 10:03:26.592700    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-217000 -v=7 --alsologtostderr: (2m59.944231166s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-217000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-217000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225652125s)

-- stdout --
	* [ha-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-217000" primary control-plane node in "ha-217000" cluster
	* Restarting existing qemu2 VM for "ha-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0731 10:04:33.401571    3065 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:04:33.401781    3065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:33.401786    3065 out.go:304] Setting ErrFile to fd 2...
	I0731 10:04:33.401789    3065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:33.401971    3065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:04:33.403334    3065 out.go:298] Setting JSON to false
	I0731 10:04:33.422522    3065 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2037,"bootTime":1722443436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:04:33.422594    3065 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:04:33.428353    3065 out.go:177] * [ha-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:04:33.436309    3065 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:04:33.436378    3065 notify.go:220] Checking for updates...
	I0731 10:04:33.444215    3065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:04:33.445442    3065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:04:33.448229    3065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:04:33.451238    3065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:04:33.454262    3065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:04:33.457616    3065 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:04:33.457668    3065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:04:33.462193    3065 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:04:33.469297    3065 start.go:297] selected driver: qemu2
	I0731 10:04:33.469304    3065 start.go:901] validating driver "qemu2" against &{Name:ha-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-217000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:04:33.469394    3065 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:04:33.472210    3065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:04:33.472251    3065 cni.go:84] Creating CNI manager for ""
	I0731 10:04:33.472256    3065 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 10:04:33.472316    3065 start.go:340] cluster config:
	{Name:ha-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-217000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:04:33.476655    3065 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:04:33.485168    3065 out.go:177] * Starting "ha-217000" primary control-plane node in "ha-217000" cluster
	I0731 10:04:33.489238    3065 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:04:33.489254    3065 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:04:33.489271    3065 cache.go:56] Caching tarball of preloaded images
	I0731 10:04:33.489346    3065 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:04:33.489353    3065 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:04:33.489435    3065 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/ha-217000/config.json ...
	I0731 10:04:33.489875    3065 start.go:360] acquireMachinesLock for ha-217000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:04:33.489911    3065 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-217000"
	I0731 10:04:33.489920    3065 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:04:33.489926    3065 fix.go:54] fixHost starting: 
	I0731 10:04:33.490049    3065 fix.go:112] recreateIfNeeded on ha-217000: state=Stopped err=<nil>
	W0731 10:04:33.490058    3065 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:04:33.494253    3065 out.go:177] * Restarting existing qemu2 VM for "ha-217000" ...
	I0731 10:04:33.502273    3065 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:04:33.502311    3065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:d9:d5:a4:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/disk.qcow2
	I0731 10:04:33.504465    3065 main.go:141] libmachine: STDOUT: 
	I0731 10:04:33.504487    3065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:04:33.504515    3065 fix.go:56] duration metric: took 14.590333ms for fixHost
	I0731 10:04:33.504520    3065 start.go:83] releasing machines lock for "ha-217000", held for 14.603917ms
	W0731 10:04:33.504553    3065 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:04:33.504586    3065 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:04:33.504591    3065 start.go:729] Will try again in 5 seconds ...
	I0731 10:04:38.506651    3065 start.go:360] acquireMachinesLock for ha-217000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:04:38.506991    3065 start.go:364] duration metric: took 254.042µs to acquireMachinesLock for "ha-217000"
	I0731 10:04:38.507111    3065 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:04:38.507128    3065 fix.go:54] fixHost starting: 
	I0731 10:04:38.507814    3065 fix.go:112] recreateIfNeeded on ha-217000: state=Stopped err=<nil>
	W0731 10:04:38.507838    3065 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:04:38.512253    3065 out.go:177] * Restarting existing qemu2 VM for "ha-217000" ...
	I0731 10:04:38.520208    3065 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:04:38.520394    3065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:d9:d5:a4:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/disk.qcow2
	I0731 10:04:38.529397    3065 main.go:141] libmachine: STDOUT: 
	I0731 10:04:38.529478    3065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:04:38.529536    3065 fix.go:56] duration metric: took 22.409959ms for fixHost
	I0731 10:04:38.529551    3065 start.go:83] releasing machines lock for "ha-217000", held for 22.54175ms
	W0731 10:04:38.529709    3065 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:04:38.537203    3065 out.go:177] 
	W0731 10:04:38.541300    3065 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:04:38.541359    3065 out.go:239] * 
	* 
	W0731 10:04:38.543917    3065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:04:38.554198    3065 out.go:177] 

** /stderr **
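
As in the earlier failures, the OUTPUT/ERROR pair in the stdout block comes from the wrapped command itself: main.go:141 executes socket_vmnet_client with the full qemu-system-aarch64 command line and captures its streams, and socket_vmnet_client fails with exit status 1 before QEMU starts. A minimal sketch of that invocation pattern (paths from the log, arguments truncated; an illustration, not minikube's actual code):

    // launch.go: run socket_vmnet_client wrapping qemu and capture both
    // streams, mirroring the STDOUT/STDERR lines logged by libmachine.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet", "qemu-system-aarch64", "-M", "virt" /* ... */)
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        err := cmd.Run() // "exit status 1" when the unix socket is unreachable
        fmt.Printf("STDOUT: %s\nSTDERR: %s\nerr: %v\n", stdout.String(), stderr.String(), err)
    }
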
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-217000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-217000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (32.683584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (185.31s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.808125ms)

-- stdout --
	* The control-plane node ha-217000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-217000"

-- /stdout --
** stderr ** 
	I0731 10:04:38.693510    3078 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:04:38.693728    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:38.693731    3078 out.go:304] Setting ErrFile to fd 2...
	I0731 10:04:38.693733    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:38.693855    3078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:04:38.694083    3078 mustload.go:65] Loading cluster: ha-217000
	I0731 10:04:38.694326    3078 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 10:04:38.694626    3078 out.go:239] ! The control-plane node ha-217000 host is not running (will try others): state=Stopped
	! The control-plane node ha-217000 host is not running (will try others): state=Stopped
	W0731 10:04:38.694719    3078 out.go:239] ! The control-plane node ha-217000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-217000-m02 host is not running (will try others): state=Stopped
	I0731 10:04:38.699057    3078 out.go:177] * The control-plane node ha-217000-m03 host is not running: state=Stopped
	I0731 10:04:38.702035    3078 out.go:177]   To start a cluster, run: "minikube start -p ha-217000"

** /stderr **
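
The warnings above show mustload's fallback: it walks the control-plane nodes in order (ha-217000, then m02, then m03) and settles on the first running host, printing the final "not running" advice only after every candidate is exhausted. An illustrative sketch of that first-running-host selection (names from the log; not minikube's actual API):

    // fallback.go: pick the first running control-plane node, else advise.
    package main

    import "fmt"

    func main() {
        state := map[string]string{
            "ha-217000": "Stopped", "ha-217000-m02": "Stopped", "ha-217000-m03": "Stopped",
        }
        for _, n := range []string{"ha-217000", "ha-217000-m02", "ha-217000-m03"} {
            if state[n] != "Running" {
                fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n, state[n])
                continue
            }
            fmt.Println("using", n)
            return
        }
        fmt.Println(`To start a cluster, run: "minikube start -p ha-217000"`)
    }
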
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-217000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr: exit status 7 (28.792958ms)

-- stdout --
	ha-217000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0731 10:04:38.732723    3080 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:04:38.732857    3080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:38.732860    3080 out.go:304] Setting ErrFile to fd 2...
	I0731 10:04:38.732864    3080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:04:38.732973    3080 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:04:38.733105    3080 out.go:298] Setting JSON to false
	I0731 10:04:38.733114    3080 mustload.go:65] Loading cluster: ha-217000
	I0731 10:04:38.733177    3080 notify.go:220] Checking for updates...
	I0731 10:04:38.733355    3080 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:04:38.733364    3080 status.go:255] checking status of ha-217000 ...
	I0731 10:04:38.733568    3080 status.go:330] ha-217000 host status = "Stopped" (err=<nil>)
	I0731 10:04:38.733571    3080 status.go:343] host is not running, skipping remaining checks
	I0731 10:04:38.733573    3080 status.go:257] ha-217000 status: &{Name:ha-217000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:04:38.733583    3080 status.go:255] checking status of ha-217000-m02 ...
	I0731 10:04:38.733669    3080 status.go:330] ha-217000-m02 host status = "Stopped" (err=<nil>)
	I0731 10:04:38.733671    3080 status.go:343] host is not running, skipping remaining checks
	I0731 10:04:38.733673    3080 status.go:257] ha-217000-m02 status: &{Name:ha-217000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:04:38.733677    3080 status.go:255] checking status of ha-217000-m03 ...
	I0731 10:04:38.733769    3080 status.go:330] ha-217000-m03 host status = "Stopped" (err=<nil>)
	I0731 10:04:38.733772    3080 status.go:343] host is not running, skipping remaining checks
	I0731 10:04:38.733774    3080 status.go:257] ha-217000-m03 status: &{Name:ha-217000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:04:38.733777    3080 status.go:255] checking status of ha-217000-m04 ...
	I0731 10:04:38.733871    3080 status.go:330] ha-217000-m04 host status = "Stopped" (err=<nil>)
	I0731 10:04:38.733874    3080 status.go:343] host is not running, skipping remaining checks
	I0731 10:04:38.733876    3080 status.go:257] ha-217000-m04 status: &{Name:ha-217000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (29.605167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-217000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-217000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-217000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-217000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (29.185375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

TestMultiControlPlane/serial/StopCluster (153s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 stop -v=7 --alsologtostderr
E0731 10:07:03.522669    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-217000 stop -v=7 --alsologtostderr: (2m32.901894208s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr: exit status 7 (64.077041ms)

-- stdout --
	ha-217000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-217000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:07:11.800123    3113 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:07:11.800335    3113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:11.800339    3113 out.go:304] Setting ErrFile to fd 2...
	I0731 10:07:11.800342    3113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:11.800490    3113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:07:11.800642    3113 out.go:298] Setting JSON to false
	I0731 10:07:11.800653    3113 mustload.go:65] Loading cluster: ha-217000
	I0731 10:07:11.800693    3113 notify.go:220] Checking for updates...
	I0731 10:07:11.800941    3113 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:07:11.800950    3113 status.go:255] checking status of ha-217000 ...
	I0731 10:07:11.801238    3113 status.go:330] ha-217000 host status = "Stopped" (err=<nil>)
	I0731 10:07:11.801243    3113 status.go:343] host is not running, skipping remaining checks
	I0731 10:07:11.801246    3113 status.go:257] ha-217000 status: &{Name:ha-217000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:07:11.801260    3113 status.go:255] checking status of ha-217000-m02 ...
	I0731 10:07:11.801388    3113 status.go:330] ha-217000-m02 host status = "Stopped" (err=<nil>)
	I0731 10:07:11.801392    3113 status.go:343] host is not running, skipping remaining checks
	I0731 10:07:11.801395    3113 status.go:257] ha-217000-m02 status: &{Name:ha-217000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:07:11.801402    3113 status.go:255] checking status of ha-217000-m03 ...
	I0731 10:07:11.801537    3113 status.go:330] ha-217000-m03 host status = "Stopped" (err=<nil>)
	I0731 10:07:11.801541    3113 status.go:343] host is not running, skipping remaining checks
	I0731 10:07:11.801544    3113 status.go:257] ha-217000-m03 status: &{Name:ha-217000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 10:07:11.801549    3113 status.go:255] checking status of ha-217000-m04 ...
	I0731 10:07:11.801674    3113 status.go:330] ha-217000-m04 host status = "Stopped" (err=<nil>)
	I0731 10:07:11.801679    3113 status.go:343] host is not running, skipping remaining checks
	I0731 10:07:11.801681    3113 status.go:257] ha-217000-m04 status: &{Name:ha-217000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr": ha-217000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-217000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (32.239ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (153.00s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-217000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-217000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.178290084s)

-- stdout --
	* [ha-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-217000" primary control-plane node in "ha-217000" cluster
	* Restarting existing qemu2 VM for "ha-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:07:11.863188    3117 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:07:11.863332    3117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:11.863335    3117 out.go:304] Setting ErrFile to fd 2...
	I0731 10:07:11.863344    3117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:11.863464    3117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:07:11.864464    3117 out.go:298] Setting JSON to false
	I0731 10:07:11.880495    3117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2195,"bootTime":1722443436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:07:11.880566    3117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:07:11.885813    3117 out.go:177] * [ha-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:07:11.892758    3117 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:07:11.892823    3117 notify.go:220] Checking for updates...
	I0731 10:07:11.900700    3117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:07:11.903742    3117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:07:11.906735    3117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:07:11.909777    3117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:07:11.912722    3117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:07:11.916072    3117 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:07:11.916333    3117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:07:11.920708    3117 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:07:11.927764    3117 start.go:297] selected driver: qemu2
	I0731 10:07:11.927769    3117 start.go:901] validating driver "qemu2" against &{Name:ha-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-217000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:07:11.927860    3117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:07:11.930314    3117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:07:11.930360    3117 cni.go:84] Creating CNI manager for ""
	I0731 10:07:11.930366    3117 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 10:07:11.930422    3117 start.go:340] cluster config:
	{Name:ha-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-217000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:07:11.934088    3117 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:07:11.942729    3117 out.go:177] * Starting "ha-217000" primary control-plane node in "ha-217000" cluster
	I0731 10:07:11.945781    3117 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:07:11.945799    3117 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:07:11.945809    3117 cache.go:56] Caching tarball of preloaded images
	I0731 10:07:11.945871    3117 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:07:11.945876    3117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:07:11.945942    3117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/ha-217000/config.json ...
	I0731 10:07:11.946349    3117 start.go:360] acquireMachinesLock for ha-217000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:07:11.946383    3117 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "ha-217000"
	I0731 10:07:11.946391    3117 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:07:11.946398    3117 fix.go:54] fixHost starting: 
	I0731 10:07:11.946516    3117 fix.go:112] recreateIfNeeded on ha-217000: state=Stopped err=<nil>
	W0731 10:07:11.946524    3117 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:07:11.949789    3117 out.go:177] * Restarting existing qemu2 VM for "ha-217000" ...
	I0731 10:07:11.956734    3117 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:07:11.956769    3117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:d9:d5:a4:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/disk.qcow2
	I0731 10:07:11.958682    3117 main.go:141] libmachine: STDOUT: 
	I0731 10:07:11.958702    3117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:07:11.958732    3117 fix.go:56] duration metric: took 12.334209ms for fixHost
	I0731 10:07:11.958737    3117 start.go:83] releasing machines lock for "ha-217000", held for 12.349875ms
	W0731 10:07:11.958744    3117 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:07:11.958789    3117 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:07:11.958794    3117 start.go:729] Will try again in 5 seconds ...
	I0731 10:07:16.960869    3117 start.go:360] acquireMachinesLock for ha-217000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:07:16.961260    3117 start.go:364] duration metric: took 299.375µs to acquireMachinesLock for "ha-217000"
	I0731 10:07:16.961385    3117 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:07:16.961406    3117 fix.go:54] fixHost starting: 
	I0731 10:07:16.962052    3117 fix.go:112] recreateIfNeeded on ha-217000: state=Stopped err=<nil>
	W0731 10:07:16.962079    3117 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:07:16.970431    3117 out.go:177] * Restarting existing qemu2 VM for "ha-217000" ...
	I0731 10:07:16.974508    3117 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:07:16.974857    3117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:58:d9:d5:a4:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/ha-217000/disk.qcow2
	I0731 10:07:16.983567    3117 main.go:141] libmachine: STDOUT: 
	I0731 10:07:16.983624    3117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:07:16.983684    3117 fix.go:56] duration metric: took 22.280042ms for fixHost
	I0731 10:07:16.983707    3117 start.go:83] releasing machines lock for "ha-217000", held for 22.424333ms
	W0731 10:07:16.983855    3117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:07:16.991484    3117 out.go:177] 
	W0731 10:07:16.994599    3117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:07:16.994638    3117 out.go:239] * 
	* 
	W0731 10:07:16.997348    3117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:07:17.005423    3117 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-217000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (68.223917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
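
Note: every start/restart in this run dies at the same step. The qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command line above, which passes the network as fd=3), and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, hence "Connection refused" and exit status 80 (GUEST_PROVISION) regardless of cluster shape. A minimal reachability probe for the daemon, written against the SocketVMnetPath value reported in these logs (a sketch for diagnosis, not part of the test suite), would be:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client connects to this unix socket before handing the
		// connected descriptor to QEMU; with no daemon listening, the dial
		// fails with "connect: connection refused" (or "no such file or
		// directory" if the socket file is gone), matching the errors above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused, restarting the socket_vmnet daemon on the build host (per the qemu2 driver's network setup) should clear this whole family of failures.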

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-217000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-217000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-217000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-217000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (29.5245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-217000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-217000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.066834ms)

-- stdout --
	* The control-plane node ha-217000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-217000"

-- /stdout --
** stderr ** 
	I0731 10:07:17.190416    3132 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:07:17.190559    3132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:17.190562    3132 out.go:304] Setting ErrFile to fd 2...
	I0731 10:07:17.190564    3132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:07:17.190681    3132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:07:17.190923    3132 mustload.go:65] Loading cluster: ha-217000
	I0731 10:07:17.191158    3132 config.go:182] Loaded profile config "ha-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 10:07:17.191456    3132 out.go:239] ! The control-plane node ha-217000 host is not running (will try others): state=Stopped
	! The control-plane node ha-217000 host is not running (will try others): state=Stopped
	W0731 10:07:17.191555    3132 out.go:239] ! The control-plane node ha-217000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-217000-m02 host is not running (will try others): state=Stopped
	I0731 10:07:17.195812    3132 out.go:177] * The control-plane node ha-217000-m03 host is not running: state=Stopped
	I0731 10:07:17.199853    3132 out.go:177]   To start a cluster, run: "minikube start -p ha-217000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-217000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-217000 -n ha-217000: exit status 7 (29.998166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.1s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 : exit status 80 (10.035346s)

-- stdout --
	* [image-786000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-786000" primary control-plane node in "image-786000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-786000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-786000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-786000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-786000 -n image-786000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-786000 -n image-786000: exit status 7 (67.074291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-786000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.10s)

TestJSONOutput/start/Command (9.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-565000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-565000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.868531958s)

-- stdout --
	{"specversion":"1.0","id":"42a45d6e-731e-443d-88f6-66f8e593337e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-565000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09374d98-4ece-49a7-b965-c94f3f43bb2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19349"}}
	{"specversion":"1.0","id":"e8c047de-a097-4feb-b5a9-8f39130759cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig"}}
	{"specversion":"1.0","id":"00604a7f-0038-42a1-8e9e-ee03b3e147b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2a27080d-591b-4f20-966f-c58fc6d24259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81ca8b43-19c2-4845-a960-4a0f0ffe05be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube"}}
	{"specversion":"1.0","id":"03009520-9b2a-4442-aa72-f6cd824f3d0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"66abe5b8-474f-4e71-a251-345af564a0ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"907569de-2b62-4f3a-8a21-34b3e006fdc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7c2b5266-3f28-4657-a4af-115c0a6c6e76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-565000\" primary control-plane node in \"json-output-565000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8de98d14-93e6-429b-9600-7878ff9bd22f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4968325b-5873-408b-bd1c-d694b55dc0cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-565000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f21de79f-b239-465a-b0db-c10db7a1fdea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"4c081b12-aad6-477b-988f-191d92b7c059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4d796d4b-5ad1-4dd2-94bd-4a1fd8b6f351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-565000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"2a52a6dd-e058-4a18-9f6d-cec2e3974fea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"6626e018-3edd-4988-86fd-9550a041dccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-565000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.87s)
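
Note: the marshalling failure above is mechanical. The checker decodes each captured stdout line as a CloudEvent, and the bare "OUTPUT:" / "ERROR:" lines that socket_vmnet_client writes during the failed VM creation are not JSON, so decoding stops at the first byte. A one-line reproduction of the decoder error (a sketch, not the test's actual helper):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// The first non-JSON line in the captured output begins with "OUTPUT:",
		// so the decoder rejects it immediately.
		var event map[string]interface{}
		err := json.Unmarshal([]byte("OUTPUT: "), &event)
		fmt.Println(err) // invalid character 'O' looking for beginning of value
	}

The same mechanism produces the "invalid character '*'" failure in TestJSONOutput/unpause below, where the human-readable "* The control-plane node ..." line is decoded instead of JSON.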

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-565000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-565000 --output=json --user=testUser: exit status 83 (76.053708ms)

-- stdout --
	{"specversion":"1.0","id":"422e90d6-5436-4d8d-a3ec-b3b33cd88f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-565000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"a8e482cf-d693-4ac2-965a-06ae876f055b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-565000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-565000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-565000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-565000 --output=json --user=testUser: exit status 83 (43.947208ms)

-- stdout --
	* The control-plane node json-output-565000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-565000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-565000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-565000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-917000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-917000 --driver=qemu2 : exit status 80 (9.792134792s)

-- stdout --
	* [first-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-917000" primary control-plane node in "first-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-917000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 10:07:49.970511 -0700 PDT m=+1724.937363168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-919000 -n second-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-919000 -n second-919000: exit status 85 (81.368083ms)

-- stdout --
	* Profile "second-919000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-919000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-919000" host is not running, skipping log retrieval (state="* Profile \"second-919000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-919000\"")
helpers_test.go:175: Cleaning up "second-919000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-919000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 10:07:50.157753 -0700 PDT m=+1725.124605626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-917000 -n first-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-917000 -n first-917000: exit status 7 (29.496167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-917000
--- FAIL: TestMinikubeProfile (10.09s)

TestMountStart/serial/StartWithMountFirst (9.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-881000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-881000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.848251791s)

                                                
                                                
-- stdout --
	* [mount-start-1-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-881000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-881000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-881000 -n mount-start-1-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-881000 -n mount-start-1-881000: exit status 7 (70.16975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.92s)
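
Every qemu2 start in this run dies the same way: the driver launches the VM through socket_vmnet_client, which cannot reach a socket_vmnet daemon at /var/run/socket_vmnet, so host creation aborts and the test exits with status 80 (GUEST_PROVISION). A minimal sketch of how one might confirm and clear this on the affected host, assuming socket_vmnet is installed under /opt/socket_vmnet as the paths in this log indicate (the gateway address below is socket_vmnet's documented default, not a value taken from this report):

	# Is anything serving the socket the driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start the daemon (foreground, for a quick test):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet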

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-759000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-759000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.936806042s)

                                                
                                                
-- stdout --
	* [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:08:00.390133    3271 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:08:00.390267    3271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:08:00.390271    3271 out.go:304] Setting ErrFile to fd 2...
	I0731 10:08:00.390273    3271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:08:00.390416    3271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:08:00.391484    3271 out.go:298] Setting JSON to false
	I0731 10:08:00.407460    3271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2244,"bootTime":1722443436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:08:00.407530    3271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:08:00.412942    3271 out.go:177] * [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:08:00.419888    3271 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:08:00.419952    3271 notify.go:220] Checking for updates...
	I0731 10:08:00.426893    3271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:08:00.429761    3271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:08:00.432884    3271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:08:00.435957    3271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:08:00.437340    3271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:08:00.441091    3271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:08:00.444905    3271 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:08:00.453908    3271 start.go:297] selected driver: qemu2
	I0731 10:08:00.453915    3271 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:08:00.453921    3271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:08:00.456244    3271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:08:00.458876    3271 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:08:00.460162    3271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:08:00.460230    3271 cni.go:84] Creating CNI manager for ""
	I0731 10:08:00.460236    3271 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 10:08:00.460240    3271 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:08:00.460269    3271 start.go:340] cluster config:
	{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:08:00.464001    3271 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:08:00.470911    3271 out.go:177] * Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	I0731 10:08:00.474793    3271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:08:00.474813    3271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:08:00.474834    3271 cache.go:56] Caching tarball of preloaded images
	I0731 10:08:00.474901    3271 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:08:00.474907    3271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:08:00.475112    3271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/multinode-759000/config.json ...
	I0731 10:08:00.475126    3271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/multinode-759000/config.json: {Name:mk835b229b0e289d7650726319bcaddf5fe6a860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:08:00.475455    3271 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:08:00.475491    3271 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "multinode-759000"
	I0731 10:08:00.475501    3271 start.go:93] Provisioning new machine with config: &{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:08:00.475542    3271 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:08:00.482893    3271 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:08:00.500357    3271 start.go:159] libmachine.API.Create for "multinode-759000" (driver="qemu2")
	I0731 10:08:00.500391    3271 client.go:168] LocalClient.Create starting
	I0731 10:08:00.500466    3271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:08:00.500498    3271 main.go:141] libmachine: Decoding PEM data...
	I0731 10:08:00.500510    3271 main.go:141] libmachine: Parsing certificate...
	I0731 10:08:00.500564    3271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:08:00.500588    3271 main.go:141] libmachine: Decoding PEM data...
	I0731 10:08:00.500595    3271 main.go:141] libmachine: Parsing certificate...
	I0731 10:08:00.500993    3271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:08:00.653645    3271 main.go:141] libmachine: Creating SSH key...
	I0731 10:08:00.801277    3271 main.go:141] libmachine: Creating Disk image...
	I0731 10:08:00.801285    3271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:08:00.801471    3271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:00.811132    3271 main.go:141] libmachine: STDOUT: 
	I0731 10:08:00.811148    3271 main.go:141] libmachine: STDERR: 
	I0731 10:08:00.811208    3271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2 +20000M
	I0731 10:08:00.819065    3271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:08:00.819081    3271 main.go:141] libmachine: STDERR: 
	I0731 10:08:00.819094    3271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:00.819099    3271 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:08:00.819111    3271 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:08:00.819148    3271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:37:92:41:90:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:00.820945    3271 main.go:141] libmachine: STDOUT: 
	I0731 10:08:00.820958    3271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:08:00.820975    3271 client.go:171] duration metric: took 320.583958ms to LocalClient.Create
	I0731 10:08:02.823118    3271 start.go:128] duration metric: took 2.347592459s to createHost
	I0731 10:08:02.823202    3271 start.go:83] releasing machines lock for "multinode-759000", held for 2.347742625s
	W0731 10:08:02.823261    3271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:08:02.834409    3271 out.go:177] * Deleting "multinode-759000" in qemu2 ...
	W0731 10:08:02.865074    3271 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:08:02.865100    3271 start.go:729] Will try again in 5 seconds ...
	I0731 10:08:07.867236    3271 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:08:07.867717    3271 start.go:364] duration metric: took 344.291µs to acquireMachinesLock for "multinode-759000"
	I0731 10:08:07.867832    3271 start.go:93] Provisioning new machine with config: &{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:08:07.868102    3271 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:08:07.883923    3271 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:08:07.933733    3271 start.go:159] libmachine.API.Create for "multinode-759000" (driver="qemu2")
	I0731 10:08:07.933784    3271 client.go:168] LocalClient.Create starting
	I0731 10:08:07.933898    3271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:08:07.933965    3271 main.go:141] libmachine: Decoding PEM data...
	I0731 10:08:07.933981    3271 main.go:141] libmachine: Parsing certificate...
	I0731 10:08:07.934046    3271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:08:07.934089    3271 main.go:141] libmachine: Decoding PEM data...
	I0731 10:08:07.934099    3271 main.go:141] libmachine: Parsing certificate...
	I0731 10:08:07.934609    3271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:08:08.095601    3271 main.go:141] libmachine: Creating SSH key...
	I0731 10:08:08.232874    3271 main.go:141] libmachine: Creating Disk image...
	I0731 10:08:08.232880    3271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:08:08.233051    3271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:08.242540    3271 main.go:141] libmachine: STDOUT: 
	I0731 10:08:08.242554    3271 main.go:141] libmachine: STDERR: 
	I0731 10:08:08.242612    3271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2 +20000M
	I0731 10:08:08.250413    3271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:08:08.250428    3271 main.go:141] libmachine: STDERR: 
	I0731 10:08:08.250439    3271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:08.250443    3271 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:08:08.250470    3271 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:08:08.250498    3271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:88:54:4d:77:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:08:08.252151    3271 main.go:141] libmachine: STDOUT: 
	I0731 10:08:08.252166    3271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:08:08.252177    3271 client.go:171] duration metric: took 318.3925ms to LocalClient.Create
	I0731 10:08:10.254324    3271 start.go:128] duration metric: took 2.386233917s to createHost
	I0731 10:08:10.254391    3271 start.go:83] releasing machines lock for "multinode-759000", held for 2.3866865s
	W0731 10:08:10.254723    3271 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:08:10.265261    3271 out.go:177] 
	W0731 10:08:10.273383    3271 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:08:10.273414    3271 out.go:239] * 
	* 
	W0731 10:08:10.276165    3271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:08:10.285111    3271 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-759000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (67.75675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
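
The stderr above shows the full failure cycle: createHost fails when socket_vmnet_client is exec'd, minikube deletes the half-created machine, waits ("Will try again in 5 seconds"), fails the retry identically, and exits via GUEST_PROVISION. The connection failure can be reproduced in isolation, without qemu, because socket_vmnet_client simply connects to the socket and then execs its command argument (`true` below is an arbitrary placeholder command, not taken from this log):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# with no daemon listening, this prints the same error seen throughout this report:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused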

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (119.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.334ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-759000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- rollout status deployment/busybox: exit status 1 (55.978125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.577375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.590042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.802417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 10:08:14.288414    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.136958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.307958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.690958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.934625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.601958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.262167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.318208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 10:09:37.353887    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.867709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.039041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.335834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.767084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.913208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.293708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.92s)
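
All of the kubectl failures above happen client-side, before any API server is contacted: the cluster was never provisioned, so the kubeconfig the tests point at appears to carry no usable server entry for "multinode-759000", which is what kubectl's "no server found for cluster" message indicates. A quick check one could run from the same workspace (a sketch; the KUBECONFIG path comes from the log earlier in this report, and the jsonpath expression assumes only the standard kubeconfig layout):

	export KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	kubectl config get-contexts
	kubectl config view -o jsonpath='{.clusters[*].name}'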

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-759000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.842916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.041834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-759000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-759000 -v 3 --alsologtostderr: exit status 83 (41.34125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-759000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:10:10.396477    3363 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:10.396658    3363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.396661    3363 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:10.396664    3363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.396793    3363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:10.397052    3363 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:10.397237    3363 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:10.401760    3363 out.go:177] * The control-plane node multinode-759000 host is not running: state=Stopped
	I0731 10:10:10.405684    3363 out.go:177]   To start a cluster, run: "minikube start -p multinode-759000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-759000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (28.629917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-759000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-759000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.235542ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-759000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-759000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-759000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.83275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-759000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-759000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-759000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-759000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.068167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
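
The assertion in this test is purely a node count: it expects three nodes in the profile's config, but the JSON above contains a single entry in Config.Nodes (the primary control-plane node that never started). One way to eyeball that from the same output, assuming jq is available on the host (it is not part of the test harness):

	out/minikube-darwin-arm64 profile list --output json | jq '.valid[0].Config.Nodes | length'
	# prints 1 for the profile shown above; the test requires 3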

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status --output json --alsologtostderr: exit status 7 (29.59925ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-759000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:10:10.600847    3375 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:10.600991    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.600994    3375 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:10.600997    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.601126    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:10.601251    3375 out.go:298] Setting JSON to true
	I0731 10:10:10.601260    3375 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:10.601328    3375 notify.go:220] Checking for updates...
	I0731 10:10:10.601454    3375 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:10.601464    3375 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:10.601667    3375 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:10.601671    3375 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:10.601674    3375 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-759000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (28.880709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
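The decode error above (`json: cannot unmarshal object into Go value of type []cmd.Status`) means `status --output json` printed a single JSON object for this one-node cluster while the test unmarshals into a slice. A minimal Go sketch of a decoder tolerant of both shapes; the `status` struct fields are taken from the stdout above, not from the real `cmd.Status` type, and this is an illustration, not a fix endorsed by the test:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// status mirrors the fields visible in the stdout above; the real test
	// decodes into its own cmd.Status type.
	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a single object (what this one-node run
	// printed) or an array (what the multinode assertion expects).
	func decodeStatuses(data []byte) ([]status, error) {
		var many []status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-759000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}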

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 node stop m03: exit status 85 (47.48075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-759000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status: exit status 7 (30.020292ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr: exit status 7 (28.691334ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:10.736716    3383 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:10.736864    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.736870    3383 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:10.736872    3383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.737012    3383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:10.737126    3383 out.go:298] Setting JSON to false
	I0731 10:10:10.737135    3383 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:10.737199    3383 notify.go:220] Checking for updates...
	I0731 10:10:10.737323    3383 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:10.737329    3383 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:10.737539    3383 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:10.737543    3383 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:10.737546    3383 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr": multinode-759000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (28.600375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
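The "incorrect number of running kubelets" assertion counts kubelet states in the plain-text status output quoted above. A hedged sketch of that kind of count; `countRunningKubelets` is a hypothetical helper, not the test's actual code, and against the stdout above it yields 0:

	package main

	import (
		"fmt"
		"strings"
	)

	// countRunningKubelets counts "kubelet: Running" lines in `minikube status`
	// output — a sketch of the kind of check behind the failure message.
	func countRunningKubelets(out string) int {
		n := 0
		for _, line := range strings.Split(out, "\n") {
			if strings.TrimSpace(line) == "kubelet: Running" {
				n++
			}
		}
		return n
	}

	func main() {
		out := "multinode-759000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println(countRunningKubelets(out)) // prints 0, which is why the assertion fails
	}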

TestMultiNode/serial/StartAfterStop (47.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.143417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 10:10:10.794914    3387 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:10.795126    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.795130    3387 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:10.795132    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.795270    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:10.795511    3387 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:10.795691    3387 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:10.799729    3387 out.go:177] 
	W0731 10:10:10.803725    3387 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 10:10:10.803729    3387 out.go:239] * 
	* 
	W0731 10:10:10.805366    3387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:10:10.808654    3387 out.go:177] 

** /stderr **
multinode_test.go:284: I0731 10:10:10.794914    3387 out.go:291] Setting OutFile to fd 1 ...
I0731 10:10:10.795126    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 10:10:10.795130    3387 out.go:304] Setting ErrFile to fd 2...
I0731 10:10:10.795132    3387 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 10:10:10.795270    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 10:10:10.795511    3387 mustload.go:65] Loading cluster: multinode-759000
I0731 10:10:10.795691    3387 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 10:10:10.799729    3387 out.go:177] 
W0731 10:10:10.803725    3387 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 10:10:10.803729    3387 out.go:239] * 
* 
W0731 10:10:10.805366    3387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 10:10:10.808654    3387 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-759000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (29.906916ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:10.841861    3389 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:10.841997    3389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.842004    3389 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:10.842006    3389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:10.842120    3389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:10.842232    3389 out.go:298] Setting JSON to false
	I0731 10:10:10.842241    3389 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:10.842307    3389 notify.go:220] Checking for updates...
	I0731 10:10:10.842435    3389 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:10.842445    3389 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:10.842658    3389 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:10.842662    3389 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:10.842664    3389 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (76.5595ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:11.567551    3391 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:11.567769    3391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:11.567773    3391 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:11.567776    3391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:11.567950    3391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:11.568110    3391 out.go:298] Setting JSON to false
	I0731 10:10:11.568123    3391 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:11.568160    3391 notify.go:220] Checking for updates...
	I0731 10:10:11.568388    3391 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:11.568397    3391 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:11.568695    3391 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:11.568700    3391 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:11.568703    3391 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (74.691375ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:12.752951    3393 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:12.753180    3393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:12.753185    3393 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:12.753189    3393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:12.753370    3393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:12.753543    3393 out.go:298] Setting JSON to false
	I0731 10:10:12.753556    3393 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:12.753603    3393 notify.go:220] Checking for updates...
	I0731 10:10:12.753838    3393 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:12.753848    3393 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:12.754136    3393 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:12.754141    3393 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:12.754148    3393 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (72.206166ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:15.494110    3395 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:15.494289    3395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:15.494294    3395 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:15.494297    3395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:15.494477    3395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:15.494622    3395 out.go:298] Setting JSON to false
	I0731 10:10:15.494636    3395 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:15.494695    3395 notify.go:220] Checking for updates...
	I0731 10:10:15.494894    3395 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:15.494902    3395 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:15.495176    3395 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:15.495181    3395 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:15.495184    3395 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (75.837542ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:17.922939    3397 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:17.923126    3397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:17.923131    3397 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:17.923134    3397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:17.923305    3397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:17.923455    3397 out.go:298] Setting JSON to false
	I0731 10:10:17.923467    3397 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:17.923503    3397 notify.go:220] Checking for updates...
	I0731 10:10:17.923742    3397 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:17.923751    3397 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:17.924019    3397 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:17.924024    3397 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:17.924027    3397 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (72.428125ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:24.615125    3399 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:24.615337    3399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:24.615342    3399 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:24.615345    3399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:24.615528    3399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:24.615692    3399 out.go:298] Setting JSON to false
	I0731 10:10:24.615703    3399 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:24.615742    3399 notify.go:220] Checking for updates...
	I0731 10:10:24.615952    3399 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:24.615961    3399 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:24.616236    3399 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:24.616241    3399 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:24.616244    3399 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (72.43725ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:29.173783    3401 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:29.174000    3401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:29.174004    3401 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:29.174008    3401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:29.174187    3401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:29.174351    3401 out.go:298] Setting JSON to false
	I0731 10:10:29.174363    3401 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:29.174415    3401 notify.go:220] Checking for updates...
	I0731 10:10:29.174676    3401 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:29.174685    3401 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:29.174981    3401 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:29.174987    3401 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:29.174990    3401 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (72.665417ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:41.363419    3406 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:41.363648    3406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:41.363652    3406 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:41.363655    3406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:41.363852    3406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:41.364027    3406 out.go:298] Setting JSON to false
	I0731 10:10:41.364045    3406 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:41.364085    3406 notify.go:220] Checking for updates...
	I0731 10:10:41.364324    3406 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:41.364333    3406 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:41.364632    3406 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:41.364637    3406 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:41.364640    3406 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr: exit status 7 (72.257875ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:10:58.542433    3414 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:10:58.542617    3414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:58.542622    3414 out.go:304] Setting ErrFile to fd 2...
	I0731 10:10:58.542625    3414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:10:58.542786    3414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:10:58.542939    3414 out.go:298] Setting JSON to false
	I0731 10:10:58.542951    3414 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:10:58.542984    3414 notify.go:220] Checking for updates...
	I0731 10:10:58.543209    3414 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:10:58.543222    3414 status.go:255] checking status of multinode-759000 ...
	I0731 10:10:58.543502    3414 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:10:58.543507    3414 status.go:343] host is not running, skipping remaining checks
	I0731 10:10:58.543510    3414 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-759000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (32.315833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.81s)
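The timestamps on the repeated status calls above (10:10:10, 10:10:11, 10:10:12, 10:10:15, 10:10:17, 10:10:24, 10:10:29, 10:10:41, 10:10:58) show the harness retrying with a widening interval. A minimal sketch of such a poll-with-backoff loop; `pollStatus` is a hypothetical helper name, and the doubling delay is an assumption modeled on the gaps in the log, not the harness's own schedule:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus re-runs `minikube status` until it exits zero or the deadline
	// passes, doubling the delay between attempts.
	func pollStatus(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := time.Second
		for {
			err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("status never became healthy: %w", err)
			}
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		fmt.Println(pollStatus("multinode-759000", 45*time.Second))
	}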

TestMultiNode/serial/RestartKeepsNodes (8.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-759000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-759000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-759000: (2.942631083s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-759000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-759000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.217031875s)

-- stdout --
	* [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	* Restarting existing qemu2 VM for "multinode-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:11:01.611218    3438 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:11:01.611386    3438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:01.611390    3438 out.go:304] Setting ErrFile to fd 2...
	I0731 10:11:01.611398    3438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:01.611589    3438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:11:01.612823    3438 out.go:298] Setting JSON to false
	I0731 10:11:01.632098    3438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2425,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:11:01.632168    3438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:11:01.636848    3438 out.go:177] * [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:11:01.644745    3438 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:11:01.644777    3438 notify.go:220] Checking for updates...
	I0731 10:11:01.650143    3438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:11:01.652739    3438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:11:01.655790    3438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:11:01.658791    3438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:11:01.661867    3438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:11:01.665044    3438 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:11:01.665109    3438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:11:01.669763    3438 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:11:01.676757    3438 start.go:297] selected driver: qemu2
	I0731 10:11:01.676764    3438 start.go:901] validating driver "qemu2" against &{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:11:01.676824    3438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:11:01.679236    3438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:11:01.679288    3438 cni.go:84] Creating CNI manager for ""
	I0731 10:11:01.679293    3438 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 10:11:01.679350    3438 start.go:340] cluster config:
	{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:11:01.683353    3438 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:01.690716    3438 out.go:177] * Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	I0731 10:11:01.694823    3438 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:11:01.694839    3438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:11:01.694852    3438 cache.go:56] Caching tarball of preloaded images
	I0731 10:11:01.694925    3438 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:11:01.694931    3438 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:11:01.694992    3438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/multinode-759000/config.json ...
	I0731 10:11:01.695431    3438 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:01.695466    3438 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "multinode-759000"
	I0731 10:11:01.695475    3438 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:11:01.695481    3438 fix.go:54] fixHost starting: 
	I0731 10:11:01.695599    3438 fix.go:112] recreateIfNeeded on multinode-759000: state=Stopped err=<nil>
	W0731 10:11:01.695607    3438 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:11:01.699811    3438 out.go:177] * Restarting existing qemu2 VM for "multinode-759000" ...
	I0731 10:11:01.706757    3438 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:01.706803    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:88:54:4d:77:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:11:01.709019    3438 main.go:141] libmachine: STDOUT: 
	I0731 10:11:01.709038    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:01.709066    3438 fix.go:56] duration metric: took 13.585833ms for fixHost
	I0731 10:11:01.709070    3438 start.go:83] releasing machines lock for "multinode-759000", held for 13.59925ms
	W0731 10:11:01.709078    3438 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:11:01.709118    3438 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:01.709123    3438 start.go:729] Will try again in 5 seconds ...
	I0731 10:11:06.711302    3438 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:06.711838    3438 start.go:364] duration metric: took 397.625µs to acquireMachinesLock for "multinode-759000"
	I0731 10:11:06.711990    3438 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:11:06.712010    3438 fix.go:54] fixHost starting: 
	I0731 10:11:06.712780    3438 fix.go:112] recreateIfNeeded on multinode-759000: state=Stopped err=<nil>
	W0731 10:11:06.712810    3438 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:11:06.717441    3438 out.go:177] * Restarting existing qemu2 VM for "multinode-759000" ...
	I0731 10:11:06.721311    3438 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:06.721549    3438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:88:54:4d:77:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:11:06.730918    3438 main.go:141] libmachine: STDOUT: 
	I0731 10:11:06.730973    3438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:06.731034    3438 fix.go:56] duration metric: took 19.026583ms for fixHost
	I0731 10:11:06.731052    3438 start.go:83] releasing machines lock for "multinode-759000", held for 19.16725ms
	W0731 10:11:06.731248    3438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:06.738303    3438 out.go:177] 
	W0731 10:11:06.742422    3438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:11:06.742454    3438 out.go:239] * 
	* 
	W0731 10:11:06.744999    3438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:11:06.753335    3438 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-759000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-759000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (32.086542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.29s)
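Both restart attempts above die with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the socket_vmnet unix socket that the qemu command line points at. A minimal Go probe that reproduces the failure mode without booting a VM; `probeSocketVMnet` is a hypothetical helper, and the socket path is the one from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the socket_vmnet unix socket; a "connection
	// refused" here matches the driver error in the log above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		fmt.Println("socket_vmnet is accepting connections")
	}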

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 node delete m03: exit status 83 (38.332959ms)

-- stdout --
	* The control-plane node multinode-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-759000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-759000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr: exit status 7 (29.73025ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:11:06.933544    3452 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:11:06.933713    3452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:06.933716    3452 out.go:304] Setting ErrFile to fd 2...
	I0731 10:11:06.933719    3452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:06.933883    3452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:11:06.934004    3452 out.go:298] Setting JSON to false
	I0731 10:11:06.934012    3452 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:11:06.934068    3452 notify.go:220] Checking for updates...
	I0731 10:11:06.934196    3452 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:11:06.934202    3452 status.go:255] checking status of multinode-759000 ...
	I0731 10:11:06.934410    3452 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:11:06.934414    3452 status.go:343] host is not running, skipping remaining checks
	I0731 10:11:06.934417    3452 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.723333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.37s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-759000 stop: (3.238213083s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status: exit status 7 (65.562958ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr: exit status 7 (32.298292ms)

-- stdout --
	multinode-759000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 10:11:10.300054    3476 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:11:10.300192    3476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:10.300195    3476 out.go:304] Setting ErrFile to fd 2...
	I0731 10:11:10.300198    3476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:10.300331    3476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:11:10.300438    3476 out.go:298] Setting JSON to false
	I0731 10:11:10.300447    3476 mustload.go:65] Loading cluster: multinode-759000
	I0731 10:11:10.300519    3476 notify.go:220] Checking for updates...
	I0731 10:11:10.300657    3476 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:11:10.300664    3476 status.go:255] checking status of multinode-759000 ...
	I0731 10:11:10.300859    3476 status.go:330] multinode-759000 host status = "Stopped" (err=<nil>)
	I0731 10:11:10.300863    3476 status.go:343] host is not running, skipping remaining checks
	I0731 10:11:10.300866    3476 status.go:257] multinode-759000 status: &{Name:multinode-759000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr": multinode-759000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-759000 status --alsologtostderr": multinode-759000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
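Both assertions fail for the same reason: the stop test expects a control plane plus one worker, i.e. two "host: Stopped" and two "kubelet: Stopped" lines, but the earlier provisioning failures left only the single control-plane profile. A sketch of this style of substring-count check (hypothetical code in the spirit of multinode_test.go, not its exact source):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured above: only one node's worth of lines.
	status := "multinode-759000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
	wantNodes := 2 // the multinode test provisions a control plane plus one worker
	if got := strings.Count(status, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}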

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (29.576833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.37s)

TestMultiNode/serial/RestartMultiNode (5.25s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-759000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-759000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176849792s)

-- stdout --
	* [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	* Restarting existing qemu2 VM for "multinode-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-759000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:11:10.358031    3480 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:11:10.358176    3480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:10.358179    3480 out.go:304] Setting ErrFile to fd 2...
	I0731 10:11:10.358182    3480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:10.358316    3480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:11:10.359295    3480 out.go:298] Setting JSON to false
	I0731 10:11:10.375426    3480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2434,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:11:10.375495    3480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:11:10.380217    3480 out.go:177] * [multinode-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:11:10.387350    3480 notify.go:220] Checking for updates...
	I0731 10:11:10.390098    3480 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:11:10.394255    3480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:11:10.397321    3480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:11:10.401212    3480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:11:10.404209    3480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:11:10.407231    3480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:11:10.410396    3480 config.go:182] Loaded profile config "multinode-759000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:11:10.410671    3480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:11:10.415152    3480 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:11:10.422166    3480 start.go:297] selected driver: qemu2
	I0731 10:11:10.422172    3480 start.go:901] validating driver "qemu2" against &{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:11:10.422232    3480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:11:10.424586    3480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:11:10.424609    3480 cni.go:84] Creating CNI manager for ""
	I0731 10:11:10.424613    3480 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 10:11:10.424670    3480 start.go:340] cluster config:
	{Name:multinode-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:11:10.428352    3480 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:10.435095    3480 out.go:177] * Starting "multinode-759000" primary control-plane node in "multinode-759000" cluster
	I0731 10:11:10.439176    3480 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:11:10.439194    3480 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:11:10.439207    3480 cache.go:56] Caching tarball of preloaded images
	I0731 10:11:10.439261    3480 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:11:10.439268    3480 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:11:10.439329    3480 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/multinode-759000/config.json ...
	I0731 10:11:10.439737    3480 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:10.439764    3480 start.go:364] duration metric: took 21.541µs to acquireMachinesLock for "multinode-759000"
	I0731 10:11:10.439773    3480 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:11:10.439778    3480 fix.go:54] fixHost starting: 
	I0731 10:11:10.439891    3480 fix.go:112] recreateIfNeeded on multinode-759000: state=Stopped err=<nil>
	W0731 10:11:10.439899    3480 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:11:10.443201    3480 out.go:177] * Restarting existing qemu2 VM for "multinode-759000" ...
	I0731 10:11:10.447194    3480 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:10.447237    3480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:88:54:4d:77:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:11:10.449256    3480 main.go:141] libmachine: STDOUT: 
	I0731 10:11:10.449273    3480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:10.449301    3480 fix.go:56] duration metric: took 9.52375ms for fixHost
	I0731 10:11:10.449305    3480 start.go:83] releasing machines lock for "multinode-759000", held for 9.53625ms
	W0731 10:11:10.449312    3480 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:11:10.449348    3480 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:10.449353    3480 start.go:729] Will try again in 5 seconds ...
	I0731 10:11:15.451525    3480 start.go:360] acquireMachinesLock for multinode-759000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:15.451894    3480 start.go:364] duration metric: took 300.333µs to acquireMachinesLock for "multinode-759000"
	I0731 10:11:15.452021    3480 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:11:15.452039    3480 fix.go:54] fixHost starting: 
	I0731 10:11:15.452758    3480 fix.go:112] recreateIfNeeded on multinode-759000: state=Stopped err=<nil>
	W0731 10:11:15.452785    3480 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:11:15.456992    3480 out.go:177] * Restarting existing qemu2 VM for "multinode-759000" ...
	I0731 10:11:15.465159    3480 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:15.465415    3480 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:88:54:4d:77:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/multinode-759000/disk.qcow2
	I0731 10:11:15.474115    3480 main.go:141] libmachine: STDOUT: 
	I0731 10:11:15.474186    3480 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:15.474263    3480 fix.go:56] duration metric: took 22.221625ms for fixHost
	I0731 10:11:15.474282    3480 start.go:83] releasing machines lock for "multinode-759000", held for 22.363041ms
	W0731 10:11:15.474475    3480 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-759000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:15.482092    3480 out.go:177] 
	W0731 10:11:15.486219    3480 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:11:15.486280    3480 out.go:239] * 
	* 
	W0731 10:11:15.489143    3480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:11:15.495117    3480 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-759000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
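The stderr trace shows the provisioning retry shape: fixHost fails once, the driver waits five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A sketch of that two-attempt, fixed-delay pattern (illustrative only; minikube's real loop lives in start.go and differs in detail):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a stand-in for the driver's host start (assumed signature);
// here it always fails the way the qemu2 driver does in this report.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry(attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("error provisioning guest: %w", err)
}

func main() {
	if err := startWithRetry(2, 5*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}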
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (67.134667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (19.98s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-759000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-759000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-759000-m01 --driver=qemu2 : exit status 80 (9.787248709s)

-- stdout --
	* [multinode-759000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-759000-m01" primary control-plane node in "multinode-759000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-759000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-759000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-759000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-759000-m02 --driver=qemu2 : exit status 80 (9.965470166s)

-- stdout --
	* [multinode-759000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-759000-m02" primary control-plane node in "multinode-759000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-759000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-759000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-759000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-759000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-759000: exit status 83 (76.960458ms)

-- stdout --
	* The control-plane node multinode-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-759000"

-- /stdout --
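The profile name multinode-759000-m01 deliberately mimics the name minikube would give the first added node of multinode-759000, so this test exercises name-conflict handling. A hedged sketch of such a suffix check (hypothetical helper; minikube's actual validation may differ):

package main

import (
	"fmt"
	"regexp"
)

// nodeSuffix matches names like "<cluster>-m01", the pattern minikube uses
// for additional nodes (assumed here for illustration).
var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

func conflictsWith(profile string, existing []string) bool {
	m := nodeSuffix.FindStringSubmatch(profile)
	if m == nil {
		return false
	}
	for _, p := range existing {
		if p == m[1] {
			return true // "<p>-mNN" would collide with a node of cluster p
		}
	}
	return false
}

func main() {
	fmt.Println(conflictsWith("multinode-759000-m01", []string{"multinode-759000"})) // true
}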
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-759000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-759000 -n multinode-759000: exit status 7 (30.220042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-759000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.98s)

TestPreload (10.06s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-453000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-453000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.905799291s)

-- stdout --
	* [test-preload-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-453000" primary control-plane node in "test-preload-453000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-453000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:11:35.686830    3537 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:11:35.686979    3537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:35.686983    3537 out.go:304] Setting ErrFile to fd 2...
	I0731 10:11:35.686985    3537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:11:35.687101    3537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:11:35.688118    3537 out.go:298] Setting JSON to false
	I0731 10:11:35.704138    3537 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2459,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:11:35.704207    3537 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:11:35.709606    3537 out.go:177] * [test-preload-453000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:11:35.717591    3537 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:11:35.717649    3537 notify.go:220] Checking for updates...
	I0731 10:11:35.725537    3537 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:11:35.728638    3537 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:11:35.731578    3537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:11:35.734572    3537 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:11:35.737556    3537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:11:35.740736    3537 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:11:35.740788    3537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:11:35.744516    3537 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:11:35.750500    3537 start.go:297] selected driver: qemu2
	I0731 10:11:35.750505    3537 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:11:35.750512    3537 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:11:35.752636    3537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:11:35.755533    3537 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:11:35.758720    3537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:11:35.758738    3537 cni.go:84] Creating CNI manager for ""
	I0731 10:11:35.758744    3537 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:11:35.758748    3537 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:11:35.758775    3537 start.go:340] cluster config:
	{Name:test-preload-453000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:11:35.762582    3537 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.769524    3537 out.go:177] * Starting "test-preload-453000" primary control-plane node in "test-preload-453000" cluster
	I0731 10:11:35.773618    3537 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 10:11:35.773700    3537 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/test-preload-453000/config.json ...
	I0731 10:11:35.773728    3537 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/test-preload-453000/config.json: {Name:mk524aadb254db323cdae86674fa829842a91c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:11:35.773727    3537 cache.go:107] acquiring lock: {Name:mkad15f9c461089697d77e4b9d4adcaf77e2ae5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.773738    3537 cache.go:107] acquiring lock: {Name:mk48f8ef768db3d0a1caa01f5356c0d6aeb7c7f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.773770    3537 cache.go:107] acquiring lock: {Name:mk3710093351fc5999db5052ff87e7554be2f14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.773935    3537 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 10:11:35.773736    3537 cache.go:107] acquiring lock: {Name:mkb98b90252efc2f55a417da1b409509b65c9f10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.773970    3537 cache.go:107] acquiring lock: {Name:mkb4f80768eb568f8520f626605af63dd30a377d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.773998    3537 cache.go:107] acquiring lock: {Name:mk231186c6ce230a79466e0093bfb98d89c8d1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.774025    3537 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 10:11:35.774015    3537 cache.go:107] acquiring lock: {Name:mk3d991f056c59c4c699d11173ca2c3084b98be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.774098    3537 start.go:360] acquireMachinesLock for test-preload-453000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:35.774105    3537 cache.go:107] acquiring lock: {Name:mkd020bc9b33232d6572b93b4f7c9d015b504676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:11:35.774125    3537 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:11:35.774143    3537 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 10:11:35.774245    3537 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 10:11:35.774261    3537 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 10:11:35.774267    3537 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:11:35.774143    3537 start.go:364] duration metric: took 36.083µs to acquireMachinesLock for "test-preload-453000"
	I0731 10:11:35.774345    3537 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:11:35.774318    3537 start.go:93] Provisioning new machine with config: &{Name:test-preload-453000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:11:35.774397    3537 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:11:35.782540    3537 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:11:35.787633    3537 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 10:11:35.787653    3537 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 10:11:35.787688    3537 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:11:35.787742    3537 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 10:11:35.789537    3537 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 10:11:35.789712    3537 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 10:11:35.789763    3537 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:11:35.789940    3537 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:11:35.800842    3537 start.go:159] libmachine.API.Create for "test-preload-453000" (driver="qemu2")
	I0731 10:11:35.800863    3537 client.go:168] LocalClient.Create starting
	I0731 10:11:35.800937    3537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:11:35.800971    3537 main.go:141] libmachine: Decoding PEM data...
	I0731 10:11:35.800981    3537 main.go:141] libmachine: Parsing certificate...
	I0731 10:11:35.801021    3537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:11:35.801044    3537 main.go:141] libmachine: Decoding PEM data...
	I0731 10:11:35.801052    3537 main.go:141] libmachine: Parsing certificate...
	I0731 10:11:35.801412    3537 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:11:35.952749    3537 main.go:141] libmachine: Creating SSH key...
	I0731 10:11:36.043208    3537 main.go:141] libmachine: Creating Disk image...
	I0731 10:11:36.043232    3537 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:11:36.043405    3537 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
	I0731 10:11:36.053278    3537 main.go:141] libmachine: STDOUT: 
	I0731 10:11:36.053300    3537 main.go:141] libmachine: STDERR: 
	I0731 10:11:36.053354    3537 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2 +20000M
	I0731 10:11:36.062556    3537 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:11:36.062575    3537 main.go:141] libmachine: STDERR: 
	I0731 10:11:36.062588    3537 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
	I0731 10:11:36.062593    3537 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:11:36.062604    3537 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:36.062631    3537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:67:d6:5f:13:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
	I0731 10:11:36.064842    3537 main.go:141] libmachine: STDOUT: 
	I0731 10:11:36.064863    3537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:36.064883    3537 client.go:171] duration metric: took 264.019334ms to LocalClient.Create
	I0731 10:11:36.187976    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 10:11:36.219352    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0731 10:11:36.237955    3537 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 10:11:36.237974    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 10:11:36.278395    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 10:11:36.294399    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 10:11:36.321392    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 10:11:36.360328    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 10:11:36.438867    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 10:11:36.438912    3537 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 665.15725ms
	I0731 10:11:36.438944    3537 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0731 10:11:36.844159    3537 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 10:11:36.844259    3537 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 10:11:37.062151    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 10:11:37.062205    3537 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.288486125s
	I0731 10:11:37.062233    3537 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 10:11:37.651494    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 10:11:37.651544    3537 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.877632s
	I0731 10:11:37.651593    3537 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 10:11:38.065104    3537 start.go:128] duration metric: took 2.290724875s to createHost
	I0731 10:11:38.065157    3537 start.go:83] releasing machines lock for "test-preload-453000", held for 2.290898208s
	W0731 10:11:38.065220    3537 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:38.076380    3537 out.go:177] * Deleting "test-preload-453000" in qemu2 ...
	W0731 10:11:38.104730    3537 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:38.104755    3537 start.go:729] Will try again in 5 seconds ...
	I0731 10:11:38.318447    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 10:11:38.318489    3537 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.544625375s
	I0731 10:11:38.318515    3537 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 10:11:39.476015    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 10:11:39.476063    3537 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 3.702178459s
	I0731 10:11:39.476091    3537 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 10:11:41.057371    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 10:11:41.057443    3537 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.283799542s
	I0731 10:11:41.057491    3537 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 10:11:41.138088    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 10:11:41.138136    3537 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.364506417s
	I0731 10:11:41.138158    3537 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 10:11:43.105166    3537 start.go:360] acquireMachinesLock for test-preload-453000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:11:43.105629    3537 start.go:364] duration metric: took 397.292µs to acquireMachinesLock for "test-preload-453000"
	I0731 10:11:43.105766    3537 start.go:93] Provisioning new machine with config: &{Name:test-preload-453000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:11:43.106040    3537 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:11:43.114439    3537 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:11:43.164585    3537 start.go:159] libmachine.API.Create for "test-preload-453000" (driver="qemu2")
	I0731 10:11:43.164625    3537 client.go:168] LocalClient.Create starting
	I0731 10:11:43.164740    3537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:11:43.164804    3537 main.go:141] libmachine: Decoding PEM data...
	I0731 10:11:43.164825    3537 main.go:141] libmachine: Parsing certificate...
	I0731 10:11:43.164891    3537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:11:43.164934    3537 main.go:141] libmachine: Decoding PEM data...
	I0731 10:11:43.164950    3537 main.go:141] libmachine: Parsing certificate...
	I0731 10:11:43.165494    3537 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:11:43.327928    3537 main.go:141] libmachine: Creating SSH key...
	I0731 10:11:43.494611    3537 main.go:141] libmachine: Creating Disk image...
	I0731 10:11:43.494619    3537 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:11:43.494818    3537 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
	I0731 10:11:43.504525    3537 main.go:141] libmachine: STDOUT: 
	I0731 10:11:43.504545    3537 main.go:141] libmachine: STDERR: 
	I0731 10:11:43.504601    3537 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2 +20000M
	I0731 10:11:43.512574    3537 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:11:43.512590    3537 main.go:141] libmachine: STDERR: 
	I0731 10:11:43.512602    3537 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
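Note: the two qemu-img calls above first convert the raw boot image to qcow2 and then grow it by 20000 MB. If a disk image is ever suspect it can be spot-checked on the host; the command below is illustrative and was not part of this run:

	qemu-img info /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2   # expect "file format: qcow2" and a virtual size of roughly 20G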
	I0731 10:11:43.512610    3537 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:11:43.512622    3537 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:11:43.512650    3537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:0d:f9:7f:e5:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/test-preload-453000/disk.qcow2
	I0731 10:11:43.514500    3537 main.go:141] libmachine: STDOUT: 
	I0731 10:11:43.514520    3537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:11:43.514532    3537 client.go:171] duration metric: took 349.90725ms to LocalClient.Create
	I0731 10:11:44.882457    3537 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 10:11:44.882536    3537 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.10858775s
	I0731 10:11:44.882570    3537 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 10:11:44.882638    3537 cache.go:87] Successfully saved all images to host disk.
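Note: each image cached above is written as a tarball under the profile cache directory; a plain listing is one hypothetical way to confirm the four kube-* v1.24.4 images and etcd 3.5.3-0 all landed on disk:

	ls /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/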
	I0731 10:11:45.516707    3537 start.go:128] duration metric: took 2.410670209s to createHost
	I0731 10:11:45.516823    3537 start.go:83] releasing machines lock for "test-preload-453000", held for 2.411126s
	W0731 10:11:45.517110    3537 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-453000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:11:45.531645    3537 out.go:177] 
	W0731 10:11:45.535574    3537 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:11:45.535598    3537 out.go:239] * 
	W0731 10:11:45.537922    3537 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:11:45.551553    3537 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-453000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-31 10:11:45.569713 -0700 PDT m=+1960.540637710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-453000 -n test-preload-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-453000 -n test-preload-453000: exit status 7 (68.508208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-453000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-453000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-453000
--- FAIL: TestPreload (10.06s)
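Note: every VM start in this report dies on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which means nothing is listening on the socket that socket_vmnet_client dials before launching qemu-system-aarch64. A minimal diagnostic sketch, assuming the daemon binary sits next to the client shown in the log (the --vmnet-gateway value is illustrative, not taken from this run):

	ls -l /var/run/socket_vmnet    # is the socket present at all?
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"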

TestScheduledStopUnix (10.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-217000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-217000 --memory=2048 --driver=qemu2 : exit status 80 (9.964337208s)

-- stdout --
	* [scheduled-stop-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-217000" primary control-plane node in "scheduled-stop-217000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-217000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-217000" primary control-plane node in "scheduled-stop-217000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-217000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 10:11:55.683011 -0700 PDT m=+1970.654110043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-217000 -n scheduled-stop-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-217000 -n scheduled-stop-217000: exit status 7 (66.960416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-217000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-217000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-217000
--- FAIL: TestScheduledStopUnix (10.11s)
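Note: the failure signature here is identical to TestPreload above. On a long-lived CI host the usual remedy is to run socket_vmnet as a root service so the socket outlives reboots; a sketch assuming the daemon was installed via Homebrew, which this log does not confirm:

	brew install socket_vmnet
	sudo brew services start socket_vmnet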

TestSkaffold (12.24s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3089682237 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3089682237 version: (1.069445s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-968000 --memory=2600 --driver=qemu2 
E0731 10:12:03.517484    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-968000 --memory=2600 --driver=qemu2 : exit status 80 (9.75608325s)

-- stdout --
	* [skaffold-968000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-968000" primary control-plane node in "skaffold-968000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-968000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-968000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-968000" primary control-plane node in "skaffold-968000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-968000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 10:12:07.920721 -0700 PDT m=+1982.892031668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-968000 -n skaffold-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-968000 -n skaffold-968000: exit status 7 (62.2975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-968000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-968000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-968000
--- FAIL: TestSkaffold (12.24s)

TestRunningBinaryUpgrade (590.2s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2888404988 start -p running-upgrade-323000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2888404988 start -p running-upgrade-323000 --memory=2200 --vm-driver=qemu2 : (41.246229667s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0731 10:13:14.283252    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.674538458s)

-- stdout --
	* [running-upgrade-323000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-323000" primary control-plane node in "running-upgrade-323000" cluster
	* Updating the running qemu2 "running-upgrade-323000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 10:13:13.284825    3839 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:13:13.284963    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:13.284967    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:13:13.284969    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:13.285093    3839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:13:13.286163    3839 out.go:298] Setting JSON to false
	I0731 10:13:13.302965    3839 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2557,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:13:13.303040    3839 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:13:13.308471    3839 out.go:177] * [running-upgrade-323000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:13:13.315661    3839 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:13:13.315707    3839 notify.go:220] Checking for updates...
	I0731 10:13:13.323502    3839 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:13:13.327563    3839 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:13:13.330548    3839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:13:13.333588    3839 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:13:13.336665    3839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:13:13.339908    3839 config.go:182] Loaded profile config "running-upgrade-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:13:13.343560    3839 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 10:13:13.346611    3839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:13:13.350579    3839 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:13:13.357624    3839 start.go:297] selected driver: qemu2
	I0731 10:13:13.357631    3839 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:13.357687    3839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:13:13.360043    3839 cni.go:84] Creating CNI manager for ""
	I0731 10:13:13.360061    3839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:13:13.360085    3839 start.go:340] cluster config:
	{Name:running-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:13.360135    3839 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:13:13.367568    3839 out.go:177] * Starting "running-upgrade-323000" primary control-plane node in "running-upgrade-323000" cluster
	I0731 10:13:13.371569    3839 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:13:13.371583    3839 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 10:13:13.371589    3839 cache.go:56] Caching tarball of preloaded images
	I0731 10:13:13.371633    3839 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:13:13.371638    3839 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 10:13:13.371685    3839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/config.json ...
	I0731 10:13:13.372089    3839 start.go:360] acquireMachinesLock for running-upgrade-323000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:13:15.614681    3839 start.go:364] duration metric: took 2.242618s to acquireMachinesLock for "running-upgrade-323000"
	I0731 10:13:15.614722    3839 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:13:15.614732    3839 fix.go:54] fixHost starting: 
	I0731 10:13:15.615510    3839 fix.go:112] recreateIfNeeded on running-upgrade-323000: state=Running err=<nil>
	W0731 10:13:15.615520    3839 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:13:15.620710    3839 out.go:177] * Updating the running qemu2 "running-upgrade-323000" VM ...
	I0731 10:13:15.624644    3839 machine.go:94] provisionDockerMachine start ...
	I0731 10:13:15.624694    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:15.624815    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:15.624820    3839 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 10:13:15.686293    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-323000
	
	I0731 10:13:15.686310    3839 buildroot.go:166] provisioning hostname "running-upgrade-323000"
	I0731 10:13:15.686378    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:15.686499    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:15.686504    3839 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-323000 && echo "running-upgrade-323000" | sudo tee /etc/hostname
	I0731 10:13:15.752355    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-323000
	
	I0731 10:13:15.752428    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:15.752545    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:15.752554    3839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-323000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-323000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-323000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:13:15.815869    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:13:15.815882    3839 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19349-1049/.minikube CaCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19349-1049/.minikube}
	I0731 10:13:15.815894    3839 buildroot.go:174] setting up certificates
	I0731 10:13:15.815898    3839 provision.go:84] configureAuth start
	I0731 10:13:15.815907    3839 provision.go:143] copyHostCerts
	I0731 10:13:15.815978    3839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem, removing ...
	I0731 10:13:15.815984    3839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem
	I0731 10:13:15.816301    3839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem (1082 bytes)
	I0731 10:13:15.816490    3839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem, removing ...
	I0731 10:13:15.816495    3839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem
	I0731 10:13:15.816547    3839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem (1123 bytes)
	I0731 10:13:15.816644    3839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem, removing ...
	I0731 10:13:15.816649    3839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem
	I0731 10:13:15.816689    3839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem (1679 bytes)
	I0731 10:13:15.816768    3839 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-323000 san=[127.0.0.1 localhost minikube running-upgrade-323000]
	I0731 10:13:15.922739    3839 provision.go:177] copyRemoteCerts
	I0731 10:13:15.922788    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:13:15.922798    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:13:15.957812    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 10:13:15.965655    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 10:13:15.973989    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:13:15.982609    3839 provision.go:87] duration metric: took 166.705ms to configureAuth
	I0731 10:13:15.982623    3839 buildroot.go:189] setting minikube options for container-runtime
	I0731 10:13:15.982746    3839 config.go:182] Loaded profile config "running-upgrade-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:13:15.982784    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:15.982880    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:15.982885    3839 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 10:13:16.045441    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 10:13:16.045451    3839 buildroot.go:70] root file system type: tmpfs
	I0731 10:13:16.045513    3839 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 10:13:16.045563    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:16.045679    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:16.045714    3839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 10:13:16.116150    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 10:13:16.116208    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:16.116327    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:16.116336    3839 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 10:13:16.185398    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:13:16.185409    3839 machine.go:97] duration metric: took 560.76975ms to provisionDockerMachine
	I0731 10:13:16.185415    3839 start.go:293] postStartSetup for "running-upgrade-323000" (driver="qemu2")
	I0731 10:13:16.185421    3839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:13:16.185476    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:13:16.185485    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:13:16.220229    3839 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:13:16.221743    3839 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 10:13:16.221754    3839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/addons for local assets ...
	I0731 10:13:16.221850    3839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/files for local assets ...
	I0731 10:13:16.221946    3839 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem -> 15522.pem in /etc/ssl/certs
	I0731 10:13:16.222041    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 10:13:16.225621    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:13:16.233772    3839 start.go:296] duration metric: took 48.349083ms for postStartSetup
	I0731 10:13:16.233791    3839 fix.go:56] duration metric: took 619.075625ms for fixHost
	I0731 10:13:16.233851    3839 main.go:141] libmachine: Using SSH client type: native
	I0731 10:13:16.233988    3839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1046d6a10] 0x1046d9270 <nil>  [] 0s} localhost 50244 <nil> <nil>}
	I0731 10:13:16.233996    3839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 10:13:16.296906    3839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722445996.034662098
	
	I0731 10:13:16.296917    3839 fix.go:216] guest clock: 1722445996.034662098
	I0731 10:13:16.296921    3839 fix.go:229] Guest: 2024-07-31 10:13:16.034662098 -0700 PDT Remote: 2024-07-31 10:13:16.233793 -0700 PDT m=+2.968086543 (delta=-199.130902ms)
	I0731 10:13:16.296938    3839 fix.go:200] guest clock delta is within tolerance: -199.130902ms
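Note: the fix.go lines above compare the guest clock against the host and log the skew. The same check can be approximated by hand; a rough sketch using the SSH port 50244 and key path from this log, with an arbitrary 2-second tolerance (the real threshold is not printed here):

	host=$(date +%s)
	guest=$(ssh -p 50244 -i /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa docker@localhost 'date +%s')
	delta=$((host - guest))
	[ "${delta#-}" -le 2 ] && echo "clock delta ${delta}s is within tolerance"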
	I0731 10:13:16.296941    3839 start.go:83] releasing machines lock for "running-upgrade-323000", held for 682.256375ms
	I0731 10:13:16.297014    3839 ssh_runner.go:195] Run: cat /version.json
	I0731 10:13:16.297022    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:13:16.297177    3839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:13:16.297198    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	W0731 10:13:16.297911    3839 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50390->127.0.0.1:50244: read: connection reset by peer
	I0731 10:13:16.297930    3839 retry.go:31] will retry after 215.505326ms: ssh: handshake failed: read tcp 127.0.0.1:50390->127.0.0.1:50244: read: connection reset by peer
	W0731 10:13:16.329694    3839 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 10:13:16.329760    3839 ssh_runner.go:195] Run: systemctl --version
	I0731 10:13:16.332053    3839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 10:13:16.334005    3839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 10:13:16.334052    3839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 10:13:16.337340    3839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 10:13:16.342473    3839 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 10:13:16.342487    3839 start.go:495] detecting cgroup driver to use...
	I0731 10:13:16.342564    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:13:16.348955    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 10:13:16.353204    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 10:13:16.357082    3839 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 10:13:16.357134    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 10:13:16.360827    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:13:16.364712    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 10:13:16.368729    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:13:16.372205    3839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:13:16.376359    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 10:13:16.379948    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 10:13:16.384045    3839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 10:13:16.387798    3839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:13:16.390767    3839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:13:16.393788    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:16.490359    3839 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:13:16.498686    3839 start.go:495] detecting cgroup driver to use...
	I0731 10:13:16.498769    3839 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 10:13:16.504656    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:13:16.510229    3839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 10:13:16.520654    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:13:16.527362    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:13:16.532580    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:13:16.538919    3839 ssh_runner.go:195] Run: which cri-dockerd
	I0731 10:13:16.540491    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 10:13:16.544059    3839 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 10:13:16.550669    3839 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 10:13:16.644016    3839 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 10:13:16.747008    3839 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 10:13:16.747083    3839 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 10:13:16.753031    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:16.848230    3839 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:13:20.269729    3839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.42154175s)
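
Annotation: the "scp memory --> /etc/docker/daemon.json (130 bytes)" step above writes a generated daemon.json, but the log does not show its contents. A sketch under that assumption — the JSON below is a representative shape, not the actual 130-byte payload; the line that matters for the "cgroupfs" driver is exec-opts:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Assumed shape only; the exact payload is not captured in the log.
        daemon := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
        if err := os.WriteFile("/etc/docker/daemon.json", daemon, 0o644); err != nil {
            log.Fatal(err) // needs root on a real host
        }
    }
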
	I0731 10:13:20.269797    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 10:13:20.274429    3839 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 10:13:20.281050    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:13:20.286711    3839 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 10:13:20.361062    3839 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 10:13:20.440776    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:20.520222    3839 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 10:13:20.526814    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:13:20.531330    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:20.610940    3839 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 10:13:20.654952    3839 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 10:13:20.655022    3839 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
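
Annotation: "Will wait 60s for socket path" followed by a stat is a simple existence poll. A sketch of that step (waitForSocket is a hypothetical helper, not minikube source):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the unix socket appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
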
	I0731 10:13:20.658667    3839 start.go:563] Will wait 60s for crictl version
	I0731 10:13:20.658714    3839 ssh_runner.go:195] Run: which crictl
	I0731 10:13:20.660410    3839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:13:20.673536    3839 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 10:13:20.673600    3839 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:13:20.686983    3839 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:13:20.710798    3839 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 10:13:20.710903    3839 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 10:13:20.712301    3839 kubeadm.go:883] updating cluster {Name:running-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 10:13:20.712345    3839 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:13:20.712386    3839 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:13:20.724038    3839 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:13:20.724046    3839 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 10:13:20.724092    3839 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:13:20.727158    3839 ssh_runner.go:195] Run: which lz4
	I0731 10:13:20.728361    3839 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 10:13:20.729711    3839 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 10:13:20.729728    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 10:13:21.629645    3839 docker.go:649] duration metric: took 901.327958ms to copy over tarball
	I0731 10:13:21.629699    3839 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 10:13:22.817453    3839 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.187762083s)
	I0731 10:13:22.817467    3839 ssh_runner.go:146] rm: /preloaded.tar.lz4
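
Annotation: the preload path is copy-then-extract: scp the ~360 MB preloaded-images tarball to /preloaded.tar.lz4, untar it into /var (which holds /var/lib/docker), then delete the tarball. A sketch of the extract step as a local command (assumes tar and lz4 are present on the guest, as the log's own invocation does):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same command the log runs over SSH; --xattrs preserves the
        // security.capability attribute on extracted binaries.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
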
	I0731 10:13:22.834079    3839 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:13:22.837299    3839 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 10:13:22.842869    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:22.916814    3839 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:13:24.069432    3839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152622708s)
	I0731 10:13:24.069534    3839 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:13:24.085419    3839 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:13:24.085429    3839 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 10:13:24.085434    3839 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 10:13:24.089448    3839 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:13:24.091260    3839 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:13:24.093458    3839 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:13:24.093517    3839 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:13:24.096495    3839 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:13:24.096574    3839 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:13:24.098724    3839 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:13:24.098823    3839 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 10:13:24.102126    3839 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:13:24.102312    3839 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:13:24.104118    3839 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:13:24.104152    3839 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 10:13:24.106822    3839 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:13:24.106833    3839 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:13:24.108982    3839 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:13:24.110852    3839 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:13:24.488801    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:13:24.499783    3839 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 10:13:24.499816    3839 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:13:24.499870    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:13:24.510525    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 10:13:24.512685    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:13:24.517739    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 10:13:24.520262    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 10:13:24.524719    3839 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 10:13:24.524742    3839 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:13:24.524803    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:13:24.535337    3839 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 10:13:24.535337    3839 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 10:13:24.535363    3839 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:13:24.535375    3839 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 10:13:24.535427    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 10:13:24.535427    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 10:13:24.550964    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0731 10:13:24.554407    3839 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 10:13:24.554521    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:13:24.556248    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 10:13:24.556253    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 10:13:24.556359    3839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:13:24.556361    3839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 10:13:24.567389    3839 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 10:13:24.567398    3839 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 10:13:24.567419    3839 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:13:24.567421    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 10:13:24.567391    3839 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 10:13:24.567462    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 10:13:24.567464    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:13:24.575678    3839 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 10:13:24.575691    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 10:13:24.614278    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 10:13:24.614407    3839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:13:24.619943    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:13:24.626019    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:13:24.628152    3839 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 10:13:24.630590    3839 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 10:13:24.630612    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 10:13:24.647485    3839 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 10:13:24.647507    3839 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:13:24.647560    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:13:24.662467    3839 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 10:13:24.662488    3839 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:13:24.662539    3839 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:13:24.684571    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 10:13:24.710220    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 10:13:24.737756    3839 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:13:24.737771    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0731 10:13:24.782018    3839 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 10:13:24.782121    3839 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:13:24.844672    3839 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 10:13:24.844713    3839 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 10:13:24.844738    3839 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:13:24.844794    3839 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:13:24.930082    3839 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:13:24.930096    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 10:13:26.019383    3839 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.174571417s)
	I0731 10:13:26.019422    3839 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 10:13:26.019532    3839 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.089438834s)
	I0731 10:13:26.019550    3839 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 10:13:26.019871    3839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:13:26.024857    3839 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 10:13:26.024911    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 10:13:26.091609    3839 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:13:26.091625    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 10:13:26.324388    3839 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 10:13:26.324428    3839 cache_images.go:92] duration metric: took 2.239025042s to LoadCachedImages
	W0731 10:13:26.324469    3839 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
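
Annotation: the "needs transfer" decisions above compare the image ID the runtime reports against the expected hash; any mismatch (including the image being absent, as with the registry.k8s.io names after the k8s.gcr.io preload) triggers a reload from the on-host cache. A sketch of that check (needsTransfer is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime lacks the image at the
    // expected content hash (hex without the "sha256:" prefix, as logged).
    func needsTransfer(image, wantHex string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image absent from the runtime
        }
        id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
        return id != wantHex
    }

    func main() {
        fmt.Println(needsTransfer("registry.k8s.io/pause:3.7",
            "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }
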
	I0731 10:13:26.324475    3839 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 10:13:26.324533    3839 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-323000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 10:13:26.324593    3839 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 10:13:26.338116    3839 cni.go:84] Creating CNI manager for ""
	I0731 10:13:26.338127    3839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:13:26.338132    3839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 10:13:26.338141    3839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-323000 NodeName:running-upgrade-323000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 10:13:26.338215    3839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-323000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 10:13:26.338272    3839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 10:13:26.341762    3839 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:13:26.341791    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:13:26.344977    3839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 10:13:26.350088    3839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 10:13:26.355165    3839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 10:13:26.360460    3839 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 10:13:26.361849    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:13:26.444944    3839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:13:26.450198    3839 certs.go:68] Setting up /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000 for IP: 10.0.2.15
	I0731 10:13:26.450205    3839 certs.go:194] generating shared ca certs ...
	I0731 10:13:26.450213    3839 certs.go:226] acquiring lock for ca certs: {Name:mkb79625058c6882a075e84437e973267a00b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:13:26.450375    3839 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key
	I0731 10:13:26.450410    3839 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key
	I0731 10:13:26.450416    3839 certs.go:256] generating profile certs ...
	I0731 10:13:26.450484    3839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.key
	I0731 10:13:26.450499    3839 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key.42573e32
	I0731 10:13:26.450508    3839 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt.42573e32 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 10:13:26.525177    3839 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt.42573e32 ...
	I0731 10:13:26.525184    3839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt.42573e32: {Name:mk69e75ef9d8dc44219ba90735a93ec73c378558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:13:26.525570    3839 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key.42573e32 ...
	I0731 10:13:26.525575    3839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key.42573e32: {Name:mk9fe4bce4819c3c3fac2111a9877876b83801e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:13:26.525728    3839 certs.go:381] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt.42573e32 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt
	I0731 10:13:26.525861    3839 certs.go:385] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key.42573e32 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key
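
Annotation: the apiserver cert generated above carries the four IP SANs from the log ([10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]): the service ClusterIP, loopback, and the node IP. A sketch of issuing such a cert (self-signed here for brevity; the real flow signs with minikubeCA):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("issued %d-byte DER cert", len(der))
    }
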
	I0731 10:13:26.526000    3839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/proxy-client.key
	I0731 10:13:26.526133    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem (1338 bytes)
	W0731 10:13:26.526158    3839 certs.go:480] ignoring /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552_empty.pem, impossibly tiny 0 bytes
	I0731 10:13:26.526163    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 10:13:26.526183    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:13:26.526203    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:13:26.526220    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem (1679 bytes)
	I0731 10:13:26.526261    3839 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:13:26.526575    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:13:26.534205    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 10:13:26.542147    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:13:26.549839    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 10:13:26.557568    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 10:13:26.570203    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 10:13:26.602391    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:13:26.614920    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 10:13:26.633345    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem --> /usr/share/ca-certificates/1552.pem (1338 bytes)
	I0731 10:13:26.640085    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /usr/share/ca-certificates/15522.pem (1708 bytes)
	I0731 10:13:26.646997    3839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:13:26.658435    3839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:13:26.671977    3839 ssh_runner.go:195] Run: openssl version
	I0731 10:13:26.673931    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1552.pem && ln -fs /usr/share/ca-certificates/1552.pem /etc/ssl/certs/1552.pem"
	I0731 10:13:26.680328    3839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1552.pem
	I0731 10:13:26.684684    3839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:45 /usr/share/ca-certificates/1552.pem
	I0731 10:13:26.684732    3839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1552.pem
	I0731 10:13:26.687199    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1552.pem /etc/ssl/certs/51391683.0"
	I0731 10:13:26.692102    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15522.pem && ln -fs /usr/share/ca-certificates/15522.pem /etc/ssl/certs/15522.pem"
	I0731 10:13:26.696619    3839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15522.pem
	I0731 10:13:26.712004    3839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:45 /usr/share/ca-certificates/15522.pem
	I0731 10:13:26.712053    3839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15522.pem
	I0731 10:13:26.717340    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15522.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 10:13:26.723670    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:13:26.743821    3839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:13:26.756655    3839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:39 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:13:26.756704    3839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:13:26.759836    3839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
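
Annotation: the hashed symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) come from OpenSSL's subject-name hash; the <hash>.0 symlink in /etc/ssl/certs is how OpenSSL locates a CA during verification. A sketch that derives the same name by shelling out, exactly as the log does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        // Prints e.g. /etc/ssl/certs/b5213941.0, matching the log above.
        fmt.Println("/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0")
    }
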
	I0731 10:13:26.766328    3839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 10:13:26.771914    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 10:13:26.776016    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 10:13:26.778687    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 10:13:26.782334    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 10:13:26.784743    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 10:13:26.786856    3839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
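
Annotation: each "openssl x509 -checkend 86400" run above succeeds only if the cert will still be valid 24 hours from now. An equivalent check in Go (file path taken from the log; a sketch, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // True iff the cert does not expire within the next 86400 seconds.
        fmt.Println("valid in 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
    }
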
	I0731 10:13:26.795099    3839 kubeadm.go:392] StartCluster: {Name:running-upgrade-323000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50276 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:26.795198    3839 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:13:26.825384    3839 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:13:26.831868    3839 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 10:13:26.831874    3839 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 10:13:26.831903    3839 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 10:13:26.835206    3839 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:13:26.835426    3839 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-323000" does not appear in /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:13:26.835480    3839 kubeconfig.go:62] /Users/jenkins/minikube-integration/19349-1049/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-323000" cluster setting kubeconfig missing "running-upgrade-323000" context setting]
	I0731 10:13:26.835612    3839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:13:26.836057    3839 kapi.go:59] client config for running-upgrade-323000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a6c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:13:26.836386    3839 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 10:13:26.839703    3839 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-323000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
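
Annotation: the drift detection above rests on diff's exit-code convention: `diff -u` exits 0 when the files are identical and 1 when they differ, and status 1 is what triggers the reconfigure from kubeadm.yaml.new. A sketch of that check:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
        var exitErr *exec.ExitError
        // Exit status 1 = files differ; 2 = trouble (e.g. missing file).
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            fmt.Println("kubeadm config drift detected; will reconfigure")
        }
    }
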
	I0731 10:13:26.839708    3839 kubeadm.go:1160] stopping kube-system containers ...
	I0731 10:13:26.839745    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:13:26.859656    3839 docker.go:483] Stopping containers: [5f107d2cbaf4 82101332123f 2d9f615ac5ca 85cfac70d488 8c582eb2e0c5 5c44dbc3b8e2 a0cd6a4eb287 8d90fb0de64c 25ad9f579a3d d76f08334aba 62db2bf8d9e4 0e4e813decc9 402fc84b7d90 3e09c6100d3e d81254592630 d9c48925a8ca b0d84c96d0f6 cc34cdf47c2d]
	I0731 10:13:26.859729    3839 ssh_runner.go:195] Run: docker stop 5f107d2cbaf4 82101332123f 2d9f615ac5ca 85cfac70d488 8c582eb2e0c5 5c44dbc3b8e2 a0cd6a4eb287 8d90fb0de64c 25ad9f579a3d d76f08334aba 62db2bf8d9e4 0e4e813decc9 402fc84b7d90 3e09c6100d3e d81254592630 d9c48925a8ca b0d84c96d0f6 cc34cdf47c2d
	I0731 10:13:27.030353    3839 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 10:13:27.107656    3839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:13:27.111299    3839 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 31 17:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 31 17:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 17:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 31 17:13 /etc/kubernetes/scheduler.conf
	
	I0731 10:13:27.111329    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0731 10:13:27.114391    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:13:27.114418    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:13:27.117558    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0731 10:13:27.120515    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:13:27.120542    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:13:27.122977    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0731 10:13:27.125693    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:13:27.125713    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:13:27.128710    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0731 10:13:27.131267    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:13:27.131287    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 10:13:27.133976    3839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:13:27.137252    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:13:27.159600    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:13:27.994096    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:13:28.189982    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:13:28.216325    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:13:28.237937    3839 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:13:28.238007    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:13:28.740076    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:13:29.239286    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:13:29.740063    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:13:29.744655    3839 api_server.go:72] duration metric: took 1.506743667s to wait for apiserver process to appear ...
	I0731 10:13:29.744667    3839 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:13:29.744677    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:34.745806    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:34.745828    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:39.746689    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:39.746825    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:44.747496    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:44.747544    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:49.748067    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:49.748150    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:54.749040    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:54.749078    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:59.749965    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:59.750015    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:04.751239    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:04.751316    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:09.753366    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:09.753433    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:14.754063    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:14.754082    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:19.756136    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:19.756166    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:24.758286    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:24.758330    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:29.761599    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
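
Annotation: the healthz wait above is a retry loop over GET https://10.0.2.15:8443/healthz with a 5-second per-request client timeout; every attempt in this run times out, so the flow falls through to log gathering below. A sketch of the poll (InsecureSkipVerify is for the sketch only; the real client trusts minikube's CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 12; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
        }
        fmt.Println("apiserver never became healthy")
    }
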
	I0731 10:14:29.761788    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:14:29.778717    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:14:29.778806    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:14:29.792076    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:14:29.792148    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:14:29.804176    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:14:29.804234    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:14:29.814742    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:14:29.814807    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:14:29.824791    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:14:29.824856    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:14:29.836196    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:14:29.836262    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:14:29.845982    3839 logs.go:276] 0 containers: []
	W0731 10:14:29.846003    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:14:29.846054    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:14:29.856655    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:14:29.856676    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:14:29.856681    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:14:29.900234    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:29.900332    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:29.901893    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:14:29.901898    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:14:29.915608    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:14:29.915619    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:14:29.926388    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:14:29.926401    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:14:29.938084    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:14:29.938094    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:14:29.949437    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:14:29.949448    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:14:29.960568    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:14:29.960587    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:14:29.972365    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:14:29.972376    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:14:29.977362    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:14:29.977371    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:14:29.988569    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:14:29.988581    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:14:30.000365    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:14:30.000377    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:14:30.018174    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:14:30.018184    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:14:30.089194    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:14:30.089207    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:14:30.107925    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:14:30.107943    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:14:30.127351    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:14:30.127365    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:14:30.138539    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:14:30.138551    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:14:30.150940    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:14:30.150950    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:14:30.176624    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:30.176632    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:14:30.176661    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:14:30.176665    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:30.176669    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:30.176676    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:30.176678    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:14:40.182279    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:45.185110    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:45.185449    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:14:45.223499    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:14:45.223637    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:14:45.252301    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:14:45.252391    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:14:45.265759    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:14:45.265834    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:14:45.277439    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:14:45.277517    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:14:45.288207    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:14:45.288273    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:14:45.299464    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:14:45.299539    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:14:45.309856    3839 logs.go:276] 0 containers: []
	W0731 10:14:45.309868    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:14:45.309929    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:14:45.321273    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:14:45.321291    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:14:45.321296    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:14:45.348196    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:14:45.348204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:14:45.360271    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:14:45.360284    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:14:45.379191    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:14:45.379204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:14:45.390778    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:14:45.390791    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:14:45.407033    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:14:45.407047    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:14:45.421600    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:14:45.421611    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:14:45.439347    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:14:45.439359    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:14:45.450865    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:14:45.450875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:14:45.468909    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:14:45.468922    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:14:45.485745    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:14:45.485756    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:14:45.497441    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:14:45.497453    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:14:45.540229    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:45.540322    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:45.541826    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:14:45.541831    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:14:45.579590    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:14:45.579601    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:14:45.591822    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:14:45.591834    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:14:45.609282    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:14:45.609292    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:14:45.613718    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:14:45.613728    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:14:45.625910    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:45.625921    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:14:45.625949    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:14:45.625956    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:45.625962    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:45.625966    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:45.625970    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:14:55.630642    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:00.632559    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:00.632832    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:00.659861    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:00.659976    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:00.678693    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:00.678773    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:00.692541    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:00.692612    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:00.704646    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:00.704723    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:00.714894    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:00.714961    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:00.725634    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:00.725702    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:00.735393    3839 logs.go:276] 0 containers: []
	W0731 10:15:00.735405    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:00.735453    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:00.746143    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:00.746160    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:00.746165    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:00.761629    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:00.761640    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:00.777308    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:00.777318    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:00.788846    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:00.788858    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:00.793380    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:00.793388    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:00.804572    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:00.804585    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:00.832502    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:00.832511    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:00.875318    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:00.875330    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:00.889777    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:00.889788    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:00.907289    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:00.907302    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:00.918385    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:00.918400    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:00.931028    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:00.931041    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:00.975420    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:00.975515    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:00.977112    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:00.977121    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:00.988879    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:00.988888    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:01.000147    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:01.000161    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:01.011584    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:01.011597    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:01.025310    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:01.025320    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:01.036311    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:01.036324    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:01.036352    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:01.036357    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:01.036360    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:01.036364    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:01.036367    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:11.039630    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:16.041960    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:16.042148    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:16.061192    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:16.061291    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:16.076544    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:16.076634    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:16.088716    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:16.088788    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:16.099551    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:16.099619    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:16.110093    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:16.110192    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:16.120889    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:16.120955    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:16.131261    3839 logs.go:276] 0 containers: []
	W0731 10:15:16.131271    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:16.131331    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:16.141674    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:16.141690    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:16.141695    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:16.157661    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:16.157671    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:16.176396    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:16.176411    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:16.187464    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:16.187477    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:16.199418    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:16.199427    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:16.242453    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:16.242547    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:16.244155    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:16.244159    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:16.255086    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:16.255099    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:16.266439    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:16.266453    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:16.278038    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:16.278050    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:16.304228    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:16.304236    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:16.308265    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:16.308272    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:16.323888    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:16.323900    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:16.335307    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:16.335318    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:16.352788    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:16.352799    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:16.388267    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:16.388278    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:16.403989    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:16.404000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:16.417786    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:16.417797    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:16.429644    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:16.429656    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:16.429685    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:16.429689    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:16.429692    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:16.429697    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:16.429699    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:26.433759    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:31.434518    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:31.434712    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:31.451152    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:31.451243    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:31.464204    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:31.464276    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:31.475653    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:31.475720    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:31.486304    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:31.486375    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:31.496400    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:31.496466    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:31.506773    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:31.506844    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:31.517388    3839 logs.go:276] 0 containers: []
	W0731 10:15:31.517399    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:31.517457    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:31.528135    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:31.528153    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:31.528158    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:31.540443    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:31.540453    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:31.556996    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:31.557007    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:31.568401    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:31.568411    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:31.593523    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:31.593544    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:31.604609    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:31.604627    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:31.616528    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:31.616545    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:31.659799    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:31.659892    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:31.661414    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:31.661419    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:31.696226    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:31.696236    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:31.711543    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:31.711553    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:31.723154    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:31.723164    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:31.736795    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:31.736809    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:31.748296    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:31.748307    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:31.752759    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:31.752765    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:31.765287    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:31.765299    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:31.782658    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:31.782675    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:31.794526    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:31.794537    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:31.808708    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:31.808718    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:31.808750    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:31.808756    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:31.808771    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:31.808776    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:31.808780    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:41.812194    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:46.814408    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:46.814613    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:46.828983    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:46.829062    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:46.840944    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:46.841016    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:46.851158    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:46.851224    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:46.865645    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:46.865710    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:46.876366    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:46.876433    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:46.887239    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:46.887314    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:46.897187    3839 logs.go:276] 0 containers: []
	W0731 10:15:46.897199    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:46.897256    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:46.907741    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:46.907759    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:46.907765    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:46.945427    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:46.945437    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:46.959651    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:46.959663    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:46.973773    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:46.973784    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:46.984938    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:46.984950    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:46.996242    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:46.996252    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:47.000576    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:47.000582    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:47.011986    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:47.012000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:47.029257    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:47.029268    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:47.040705    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:47.040715    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:47.051827    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:47.051837    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:47.077557    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:47.077565    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:47.088790    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:47.088801    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:47.100349    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:47.100359    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:47.111895    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:47.111905    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:47.123666    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:47.123676    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:47.142833    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:47.142849    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:47.186829    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:47.186932    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:47.188553    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:47.188559    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:47.188591    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:47.188600    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:47.188604    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:47.188606    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:47.188609    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:57.192693    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:02.195379    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:02.195652    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:02.226401    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:02.226527    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:02.243627    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:02.243716    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:02.256747    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:02.256824    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:02.267853    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:02.267918    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:02.278156    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:02.278227    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:02.289102    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:02.289171    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:02.298922    3839 logs.go:276] 0 containers: []
	W0731 10:16:02.298932    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:02.298984    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:02.309334    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:02.309352    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:02.309358    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:02.328102    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:02.328111    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:02.339928    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:02.339942    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:02.380278    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:02.380371    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:02.381876    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:02.381881    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:02.386496    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:02.386504    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:02.402754    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:02.402767    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:02.421670    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:02.421680    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:02.435821    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:02.435833    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:02.447689    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:02.447700    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:02.459340    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:02.459351    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:02.472369    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:02.472380    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:02.509244    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:02.509257    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:02.520652    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:02.520662    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:02.531596    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:02.531611    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:02.555350    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:02.555358    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:02.566477    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:02.566489    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:02.588977    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:02.588987    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:02.605809    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:02.605821    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:02.605850    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:02.605856    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:02.605860    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:02.605865    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:02.605869    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:16:12.609643    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:17.612252    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:17.612347    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:17.623408    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:17.623477    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:17.634202    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:17.634270    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:17.644629    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:17.644687    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:17.655520    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:17.655597    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:17.666168    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:17.666235    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:17.676373    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:17.676457    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:17.687125    3839 logs.go:276] 0 containers: []
	W0731 10:16:17.687137    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:17.687192    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:17.697440    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:17.697457    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:17.697462    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:17.709681    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:17.709696    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:17.721356    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:17.721369    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:17.733478    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:17.733494    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:17.744946    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:17.744961    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:17.786475    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:17.786567    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:17.788077    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:17.788101    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:17.799637    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:17.799648    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:17.811185    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:17.811195    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:17.836906    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:17.836917    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:17.849123    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:17.849135    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:17.860513    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:17.860529    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:17.871625    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:17.871637    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:17.885239    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:17.885248    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:17.897021    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:17.897035    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:17.942267    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:17.942281    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:17.956306    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:17.956314    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:17.973270    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:17.973284    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:17.978323    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:17.978330    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:17.978361    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:17.978366    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:17.978369    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:17.978373    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:17.978376    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
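The loop above re-probes the apiserver roughly every 10 seconds and gives each probe about 5 seconds before declaring it stopped. An equivalent manual probe from inside the guest might look like the following sketch (assumptions: curl is available in the guest, and the cluster CA lives under the certificateDir /var/lib/minikube/certs reported later in this log):

    # Probe the same healthz endpoint with the same ~5s budget
    curl --cacert /var/lib/minikube/certs/ca.crt --max-time 5 \
      https://10.0.2.15:8443/healthz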
	I0731 10:16:27.982379    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:32.984682    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:32.984795    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:32.996326    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:32.996401    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:33.011594    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:33.011667    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:33.022225    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:33.022293    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:33.035941    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:33.036011    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:33.046256    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:33.046318    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:33.056827    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:33.056891    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:33.067896    3839 logs.go:276] 0 containers: []
	W0731 10:16:33.067905    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:33.067954    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:33.078478    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:33.078496    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:33.078501    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:33.089807    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:33.089821    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:33.107293    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:33.107305    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:33.118642    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:33.118654    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:33.130613    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:33.130625    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:33.171684    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:33.171778    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:33.173340    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:33.173345    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:33.187596    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:33.187606    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:33.224359    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:33.224370    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:33.236146    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:33.236157    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:33.248010    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:33.248020    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:33.259960    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:33.259971    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:33.283932    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:33.283940    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:33.288012    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:33.288018    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:33.307026    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:33.307040    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:33.322860    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:33.322875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:33.342530    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:33.342541    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:33.354209    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:33.354222    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:33.365913    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:33.365923    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:33.365949    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:33.365955    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:33.365959    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:33.365963    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:33.365974    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:16:43.368024    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:48.369641    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:48.369945    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:48.418737    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:48.418863    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:48.435712    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:48.435795    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:48.448583    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:48.448661    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:48.459575    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:48.459650    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:48.470237    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:48.470304    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:48.483508    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:48.483577    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:48.493893    3839 logs.go:276] 0 containers: []
	W0731 10:16:48.493904    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:48.493963    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:48.504784    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:48.504800    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:48.504805    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:48.529389    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:48.529396    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:48.543491    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:48.543503    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:48.561374    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:48.561386    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:48.572870    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:48.572883    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:48.584634    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:48.584645    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:48.596889    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:48.596898    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:48.608826    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:48.608839    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:48.644657    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:48.644668    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:48.663689    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:48.663705    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:48.678142    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:48.678152    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:48.692217    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:48.692230    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:48.703869    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:48.703881    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:48.716044    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:48.716060    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:48.727704    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:48.727714    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:48.769252    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:48.769346    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:48.770926    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:48.770934    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:48.775657    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:48.775665    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:48.787694    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:48.787703    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:48.787728    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:48.787732    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:48.787735    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:48.787741    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:48.787744    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:16:58.791732    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:03.793338    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:03.793465    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:03.821214    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:17:03.821290    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:03.857690    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:17:03.857836    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:03.877321    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:17:03.877396    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:03.887686    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:17:03.887759    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:03.898760    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:17:03.898829    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:03.909349    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:17:03.909441    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:03.919201    3839 logs.go:276] 0 containers: []
	W0731 10:17:03.919210    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:03.919268    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:03.930597    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:17:03.930613    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:17:03.930619    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:17:03.945341    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:17:03.945350    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:03.957344    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:17:03.957357    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:17:03.968538    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:17:03.968552    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:17:03.979883    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:17:03.979898    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:17:03.991597    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:17:03.991611    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:17:04.003566    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:17:04.003575    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:17:04.014878    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:04.014890    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:04.019836    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:17:04.019842    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:17:04.034548    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:17:04.034556    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:17:04.046092    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:17:04.046102    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:17:04.063823    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:04.063834    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:04.087934    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:04.087942    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:17:04.130720    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:04.130819    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:04.132420    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:04.132425    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:04.168823    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:17:04.168836    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:17:04.180348    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:17:04.180362    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:17:04.191606    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:17:04.191617    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:17:04.205978    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:04.205991    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:17:04.206017    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:17:04.206021    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:04.206025    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:04.206028    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:04.206031    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:17:14.210075    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:19.212476    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:19.212836    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:19.245962    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:17:19.246107    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:19.266082    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:17:19.266182    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:19.281567    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:17:19.281649    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:19.293956    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:17:19.294030    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:19.305229    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:17:19.305299    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:19.316092    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:17:19.316154    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:19.327171    3839 logs.go:276] 0 containers: []
	W0731 10:17:19.327184    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:19.327240    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:19.338516    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:17:19.338535    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:19.338542    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:19.343158    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:17:19.343167    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:17:19.357104    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:17:19.357115    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:17:19.368978    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:17:19.368989    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:17:19.380327    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:17:19.380340    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:17:19.392128    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:19.392140    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:17:19.436146    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:19.436246    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:19.437780    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:17:19.437787    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:17:19.455502    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:17:19.455520    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:17:19.467469    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:19.467482    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:19.489497    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:17:19.489507    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:19.500721    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:19.500734    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:19.534770    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:17:19.534780    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:17:19.546154    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:17:19.546167    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:17:19.558179    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:17:19.558193    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:17:19.579779    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:17:19.579793    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:17:19.591482    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:17:19.591492    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:17:19.602738    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:17:19.602752    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:17:19.616595    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:19.616607    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:17:19.616635    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:17:19.616639    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:19.616644    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:19.616650    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:19.616653    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:17:29.620670    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:34.622943    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:34.623018    3839 kubeadm.go:597] duration metric: took 4m7.790766708s to restartPrimaryControlPlane
	W0731 10:17:34.623064    3839 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 10:17:34.623086    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 10:17:35.616570    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:17:35.621528    3839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:17:35.624989    3839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:17:35.628034    3839 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:17:35.628038    3839 kubeadm.go:157] found existing configuration files:
	
	I0731 10:17:35.628060    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0731 10:17:35.630524    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:17:35.630542    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:17:35.633664    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0731 10:17:35.636998    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:17:35.637020    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:17:35.639688    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0731 10:17:35.642134    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:17:35.642151    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:17:35.645266    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0731 10:17:35.647889    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:17:35.647905    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
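The four grep-then-rm pairs above implement a simple stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm init re-generates it. Condensed into one illustrative sketch of the same logic (not a command the test itself runs):

    # Remove kubeconfigs that do not reference the expected endpoint;
    # grep also fails when the file is missing, which is fine before a reset
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50276" \
        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done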
	I0731 10:17:35.650476    3839 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 10:17:35.666651    3839 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 10:17:35.666770    3839 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 10:17:35.717505    3839 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:17:35.717579    3839 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:17:35.717633    3839 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:17:35.766725    3839 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:17:35.770920    3839 out.go:204]   - Generating certificates and keys ...
	I0731 10:17:35.770959    3839 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 10:17:35.770993    3839 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 10:17:35.771032    3839 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 10:17:35.771063    3839 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 10:17:35.771098    3839 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 10:17:35.771128    3839 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 10:17:35.771168    3839 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 10:17:35.771212    3839 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 10:17:35.771248    3839 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 10:17:35.771290    3839 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 10:17:35.771313    3839 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 10:17:35.771342    3839 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:17:35.932322    3839 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:17:36.033553    3839 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:17:36.197483    3839 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:17:36.235885    3839 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:17:36.265829    3839 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:17:36.265878    3839 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:17:36.265921    3839 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 10:17:36.357052    3839 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:17:36.361274    3839 out.go:204]   - Booting up control plane ...
	I0731 10:17:36.361318    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:17:36.361356    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:17:36.361450    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:17:36.361700    3839 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:17:36.362487    3839 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:17:40.868256    3839 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504953 seconds
	I0731 10:17:40.868336    3839 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:17:40.871293    3839 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:17:41.397949    3839 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:17:41.398404    3839 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-323000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:17:41.902396    3839 kubeadm.go:310] [bootstrap-token] Using token: jjgv1s.cijpkg01pfwrwhoo
	I0731 10:17:41.906647    3839 out.go:204]   - Configuring RBAC rules ...
	I0731 10:17:41.906737    3839 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:17:41.908993    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:17:41.914796    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:17:41.915792    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:17:41.917011    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:17:41.918078    3839 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:17:41.921800    3839 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:17:42.083496    3839 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 10:17:42.310912    3839 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 10:17:42.311295    3839 kubeadm.go:310] 
	I0731 10:17:42.311329    3839 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 10:17:42.311332    3839 kubeadm.go:310] 
	I0731 10:17:42.311369    3839 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 10:17:42.311376    3839 kubeadm.go:310] 
	I0731 10:17:42.311390    3839 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 10:17:42.311423    3839 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:17:42.311453    3839 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:17:42.311456    3839 kubeadm.go:310] 
	I0731 10:17:42.311484    3839 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 10:17:42.311487    3839 kubeadm.go:310] 
	I0731 10:17:42.311509    3839 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:17:42.311513    3839 kubeadm.go:310] 
	I0731 10:17:42.311539    3839 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 10:17:42.311577    3839 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:17:42.311614    3839 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:17:42.311617    3839 kubeadm.go:310] 
	I0731 10:17:42.311662    3839 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:17:42.311710    3839 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 10:17:42.311715    3839 kubeadm.go:310] 
	I0731 10:17:42.311761    3839 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jjgv1s.cijpkg01pfwrwhoo \
	I0731 10:17:42.311814    3839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 \
	I0731 10:17:42.311827    3839 kubeadm.go:310] 	--control-plane 
	I0731 10:17:42.311830    3839 kubeadm.go:310] 
	I0731 10:17:42.311874    3839 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:17:42.311879    3839 kubeadm.go:310] 
	I0731 10:17:42.311926    3839 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jjgv1s.cijpkg01pfwrwhoo \
	I0731 10:17:42.311979    3839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 
	I0731 10:17:42.312039    3839 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
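The lone preflight warning is cosmetic for this test; if kubelet persistence across reboots mattered, kubeadm's own suggestion would be the fix:

    # Enable the kubelet unit so it starts on boot (as the warning suggests)
    sudo systemctl enable kubelet.service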
	I0731 10:17:42.312045    3839 cni.go:84] Creating CNI manager for ""
	I0731 10:17:42.312053    3839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:17:42.316523    3839 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 10:17:42.324483    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 10:17:42.327321    3839 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 10:17:42.332247    3839 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:17:42.332301    3839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:17:42.332302    3839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-323000 minikube.k8s.io/updated_at=2024_07_31T10_17_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=running-upgrade-323000 minikube.k8s.io/primary=true
	I0731 10:17:42.361886    3839 ops.go:34] apiserver oom_adj: -16
	I0731 10:17:42.361973    3839 kubeadm.go:1113] duration metric: took 29.707708ms to wait for elevateKubeSystemPrivileges
	I0731 10:17:42.374837    3839 kubeadm.go:394] duration metric: took 4m15.579495208s to StartCluster
	I0731 10:17:42.374854    3839 settings.go:142] acquiring lock: {Name:mk1157c8925fbba0a07eb1816bde23d6b74fa3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:17:42.374943    3839 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:17:42.375369    3839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:17:42.375579    3839 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:17:42.375594    3839 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 10:17:42.375632    3839 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-323000"
	I0731 10:17:42.375643    3839 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-323000"
	W0731 10:17:42.375647    3839 addons.go:243] addon storage-provisioner should already be in state true
	I0731 10:17:42.375662    3839 config.go:182] Loaded profile config "running-upgrade-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:17:42.375665    3839 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-323000"
	I0731 10:17:42.375670    3839 host.go:66] Checking if "running-upgrade-323000" exists ...
	I0731 10:17:42.375677    3839 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-323000"
	I0731 10:17:42.376558    3839 kapi.go:59] client config for running-upgrade-323000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a6c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:17:42.376704    3839 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-323000"
	W0731 10:17:42.376709    3839 addons.go:243] addon default-storageclass should already be in state true
	I0731 10:17:42.376716    3839 host.go:66] Checking if "running-upgrade-323000" exists ...
	I0731 10:17:42.378388    3839 out.go:177] * Verifying Kubernetes components...
	I0731 10:17:42.378769    3839 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:17:42.382644    3839 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:17:42.382651    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:17:42.386540    3839 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:17:42.390462    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:17:42.394474    3839 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:17:42.394480    3839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:17:42.394486    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:17:42.478240    3839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:17:42.483480    3839 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:17:42.483519    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:17:42.487684    3839 api_server.go:72] duration metric: took 112.094666ms to wait for apiserver process to appear ...
	I0731 10:17:42.487692    3839 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:17:42.487698    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:42.532540    3839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:17:42.547212    3839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:17:47.489836    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:47.489909    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:52.490594    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:52.490624    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:57.491174    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:57.491224    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:02.491760    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:02.491793    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:07.492567    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:07.492614    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:12.493598    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:12.493637    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 10:18:12.849464    3839 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 10:18:12.854370    3839 out.go:177] * Enabled addons: storage-provisioner
	I0731 10:18:12.861278    3839 addons.go:510] duration metric: took 30.48618s for enable addons: enabled=[storage-provisioner]
	I0731 10:18:17.494966    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:17.495001    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:22.496637    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:22.496685    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:27.498896    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:27.498923    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:32.501013    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:32.501056    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:37.502594    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:37.502615    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:42.504750    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
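The pattern above — a "Checking apiserver healthz" line followed roughly five seconds later by "stopped: ... Client.Timeout exceeded", repeating on a ~10s cadence — is a plain poll-with-timeout loop. A minimal Go sketch of that loop, with the endpoint and intervals read off the log (TLS verification is skipped here for brevity; minikube itself authenticates against the cluster CA):

    // Poll the apiserver /healthz endpoint until it answers 200 OK.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // ~5s gap between each "Checking" and "stopped" line above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only; not how minikube verifies the apiserver.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                status := resp.StatusCode
                resp.Body.Close()
                if status == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            } else {
                fmt.Println("stopped:", err)
            }
            // The log shows ~10s between retry rounds.
            time.Sleep(10 * time.Second)
        }
    }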
	I0731 10:18:42.504864    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:42.516339    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:18:42.516416    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:42.527488    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:18:42.527569    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:42.539790    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:18:42.539868    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:42.558385    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:18:42.558462    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:42.584991    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:18:42.585135    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:42.606547    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:18:42.606612    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:42.625319    3839 logs.go:276] 0 containers: []
	W0731 10:18:42.625334    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:42.625399    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:42.637181    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:18:42.637201    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:42.637207    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:18:42.657927    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:42.658032    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:42.676160    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:18:42.676180    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:18:42.695395    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:18:42.695407    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:18:42.707418    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:18:42.707432    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:18:42.726674    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:18:42.726687    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:18:42.740143    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:42.740156    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:42.767432    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:18:42.767446    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:42.779622    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:42.779633    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:42.784523    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:42.784533    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:42.827804    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:18:42.827818    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:18:42.843004    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:18:42.843016    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:18:42.858341    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:18:42.858353    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:18:42.878182    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:18:42.878204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:18:42.892120    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:42.892130    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:18:42.892158    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:18:42.892164    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:42.892169    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:42.892173    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:42.892176    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
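Each gathering cycle above runs the same two-step recipe per component: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to find container IDs, then docker logs --tail 400 <id> on each. A standalone Go sketch of that recipe (component list abbreviated; assumes a docker CLI on PATH):

    // Collect recent logs for each k8s_<component> container, mirroring the
    // two docker commands that appear in every gathering cycle above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            for _, id := range ids {
                // --tail 400 matches the log lines above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }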
	I0731 10:18:52.896143    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:57.898803    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:57.898916    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:57.911333    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:18:57.911414    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:57.921673    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:18:57.921744    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:57.931990    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:18:57.932060    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:57.943231    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:18:57.943298    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:57.954142    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:18:57.954210    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:57.964884    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:18:57.964949    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:57.983818    3839 logs.go:276] 0 containers: []
	W0731 10:18:57.983830    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:57.983896    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:57.996645    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:18:57.996663    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:18:57.996668    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:18:58.015616    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:18:58.015628    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:18:58.029641    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:18:58.029651    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:18:58.041907    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:18:58.041919    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:18:58.057772    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:18:58.057782    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:18:58.072548    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:18:58.072562    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:18:58.084723    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:18:58.084734    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:18:58.097680    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:18:58.097694    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:18:58.109832    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:58.109846    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:58.133334    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:58.133342    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:18:58.152040    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:58.152134    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:58.169864    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:58.169878    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:58.174383    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:58.174389    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:58.209779    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:18:58.209794    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:58.221370    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:58.221384    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:18:58.221411    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:18:58.221416    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:58.221422    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:58.221427    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:58.221431    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:08.225365    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:13.227525    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:13.227715    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:13.240203    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:13.240276    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:13.251138    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:13.251210    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:13.267023    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:13.267092    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:13.277320    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:13.277387    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:13.288003    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:13.288067    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:13.298369    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:13.298428    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:13.308930    3839 logs.go:276] 0 containers: []
	W0731 10:19:13.308943    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:13.308998    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:13.319454    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:13.319467    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:13.319473    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:13.339471    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:13.339566    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:13.356994    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:13.357002    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:13.362402    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:13.362409    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:13.374198    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:13.374210    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:13.399213    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:13.399222    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:13.411483    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:13.411494    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:13.448740    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:13.448751    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:13.465413    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:13.465429    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:13.479989    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:13.480000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:13.492690    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:13.492700    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:13.510020    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:13.510034    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:13.521519    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:13.521529    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:13.546485    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:13.546493    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:13.557790    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:13.557800    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:13.557832    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:13.557837    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:13.557841    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:13.557847    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:13.557850    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:23.561823    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:28.564037    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:28.564125    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:28.576131    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:28.576196    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:28.587832    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:28.587900    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:28.609607    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:28.609672    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:28.621375    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:28.621444    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:28.634003    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:28.634073    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:28.646131    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:28.646205    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:28.658004    3839 logs.go:276] 0 containers: []
	W0731 10:19:28.658016    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:28.658073    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:28.670638    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:28.670654    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:28.670659    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:28.691302    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:28.691398    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:28.709336    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:28.709349    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:28.747307    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:28.747321    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:28.762571    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:28.762582    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:28.777075    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:28.777088    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:28.791853    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:28.791868    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:28.804612    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:28.804626    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:28.816547    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:28.816563    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:28.820807    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:28.820816    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:28.833902    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:28.833913    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:28.846381    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:28.846395    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:28.858492    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:28.858501    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:28.876070    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:28.876080    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:28.899462    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:28.899471    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:28.899497    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:28.899502    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:28.899505    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:28.899528    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:28.899532    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:38.901827    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:43.904277    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:43.904575    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:43.940173    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:43.940271    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:43.960175    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:43.960244    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:43.973726    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:43.973804    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:43.985415    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:43.985486    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:43.998975    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:43.999047    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:44.010181    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:44.010242    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:44.021421    3839 logs.go:276] 0 containers: []
	W0731 10:19:44.021430    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:44.021480    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:44.036270    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:44.036286    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:44.036292    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:44.053478    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:44.053573    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:44.070765    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:44.070773    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:44.108641    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:44.108658    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:44.124110    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:44.124118    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:44.137330    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:44.137339    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:44.156757    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:44.156769    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:44.183891    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:44.183907    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:44.196969    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:44.196980    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:44.201861    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:44.201872    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:44.217802    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:44.217817    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:44.231141    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:44.231154    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:44.247573    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:44.247590    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:44.260884    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:44.260895    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:44.274703    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:44.274714    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:44.274739    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:44.274743    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:44.274760    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:44.274764    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:44.274767    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:54.278736    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:59.281231    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:59.281708    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:59.318544    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:59.318688    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:59.339507    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:59.339630    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:59.354328    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:19:59.354411    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:59.366679    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:59.366746    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:59.377121    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:59.377195    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:59.387777    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:59.387841    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:59.398724    3839 logs.go:276] 0 containers: []
	W0731 10:19:59.398735    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:59.398789    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:59.409337    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:59.409353    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:59.409358    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:59.414539    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:59.414550    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:59.448975    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:59.448986    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:59.463800    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:59.463813    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:59.481501    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:59.481513    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:59.500936    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:59.501029    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:59.518240    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:59.518245    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:59.535192    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:19:59.535204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:19:59.551892    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:59.551904    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:59.563054    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:19:59.563063    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:19:59.574984    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:59.574994    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:59.586494    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:59.586504    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:59.611972    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:59.611980    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:59.623956    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:59.623965    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:59.650617    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:59.650627    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:59.663052    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:59.663063    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:59.676289    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:59.676302    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:59.676331    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:59.676336    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:59.676340    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:59.676343    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:59.676346    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:09.678136    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:14.680432    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:14.680695    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:14.704308    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:14.704402    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:14.721078    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:14.721160    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:14.734289    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:14.734373    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:14.745516    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:14.745586    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:14.757072    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:14.757140    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:14.767444    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:14.767510    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:14.778283    3839 logs.go:276] 0 containers: []
	W0731 10:20:14.778298    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:14.778351    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:14.789031    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:14.789048    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:14.789053    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:14.808667    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:14.808680    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:14.820661    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:14.820671    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:14.832850    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:14.832861    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:14.848680    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:14.848691    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:14.853812    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:14.853822    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:14.868391    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:14.868401    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:14.879534    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:14.879544    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:14.891532    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:14.891547    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:14.908348    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:14.908446    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:14.925620    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:14.925626    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:14.937424    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:14.937434    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:14.949737    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:14.949749    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:14.985378    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:14.985393    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:15.001866    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:15.001877    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:15.013795    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:15.013810    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:15.038935    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:15.038943    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:15.038970    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:20:15.038974    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:15.038978    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:15.038983    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:15.038985    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:25.042922    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:30.045192    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:30.045324    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:30.058077    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:30.058149    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:30.069362    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:30.069432    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:30.080525    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:30.080599    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:30.091932    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:30.091998    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:30.102609    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:30.102673    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:30.113069    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:30.113129    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:30.123419    3839 logs.go:276] 0 containers: []
	W0731 10:20:30.123431    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:30.123482    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:30.133880    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:30.133898    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:30.133903    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:30.151520    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:30.151531    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:30.165612    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:30.165624    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:30.184095    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:30.184104    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:30.259369    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:30.259380    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:30.271907    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:30.271921    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:30.287339    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:30.287353    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:30.299217    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:30.299227    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:30.316350    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:30.316445    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:30.333518    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:30.333525    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:30.345718    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:30.345729    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:30.357874    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:30.357889    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:30.383042    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:30.383050    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:30.387153    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:30.387160    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:30.401408    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:30.401418    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:30.427065    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:30.427078    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:30.446470    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:30.446484    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:30.446509    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:20:30.446513    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:30.446517    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:30.446521    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:30.446524    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:40.450490    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:45.451705    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:45.451925    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:45.468026    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:45.468097    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:45.481171    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:45.481240    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:45.492703    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:45.492770    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:45.503336    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:45.503409    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:45.513922    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:45.513979    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:45.524220    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:45.524284    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:45.534286    3839 logs.go:276] 0 containers: []
	W0731 10:20:45.534297    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:45.534354    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:45.544543    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:45.544562    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:45.544568    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:45.563893    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:45.563988    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:45.580959    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:45.580967    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:45.598727    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:45.598741    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:45.610112    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:45.610122    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:45.622280    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:45.622290    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:45.636448    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:45.636461    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:45.648568    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:45.648581    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:45.660380    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:45.660396    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:45.676183    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:45.676194    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:45.694950    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:45.694959    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:45.699467    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:45.699473    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:45.735189    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:45.735204    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:45.758758    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:45.758766    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:45.772096    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:45.772105    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:45.783785    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:45.783794    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:45.799428    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:45.799436    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:45.799465    3839 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 10:20:45.799470    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:45.799473    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:45.799478    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:45.799481    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:55.802738    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:00.804951    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:00.805059    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:00.816670    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:00.816733    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:00.830844    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:00.830907    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:00.841364    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:00.841429    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:00.852220    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:00.852285    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:00.869761    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:00.869828    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:00.883884    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:00.883953    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:00.893903    3839 logs.go:276] 0 containers: []
	W0731 10:21:00.893914    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:00.893972    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:00.904332    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:00.904352    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:00.904357    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:00.917287    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:00.917300    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:00.928861    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:00.928875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:00.941016    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:00.941026    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:00.958721    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:00.958733    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:00.964058    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:00.964067    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:01.001795    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:01.001809    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:01.022049    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:01.022060    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:01.039958    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:01.039968    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:01.054133    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:01.054146    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:01.067777    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:01.067787    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:01.079653    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:01.079663    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:01.094311    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:01.094325    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:01.117963    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:01.117973    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:01.129709    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:01.129719    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:01.146574    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:01.146667    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:01.163722    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:01.163730    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:01.163752    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:01.163756    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:01.163769    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:01.163772    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:01.163775    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:21:11.166503    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:16.168687    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:16.168799    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:16.180596    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:16.180673    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:16.191472    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:16.191549    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:16.202200    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:16.202273    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:16.212798    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:16.212862    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:16.222668    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:16.222736    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:16.232892    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:16.232964    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:16.244927    3839 logs.go:276] 0 containers: []
	W0731 10:21:16.244938    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:16.244994    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:16.255089    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:16.255105    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:16.255110    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:16.267056    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:16.267066    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:16.290331    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:16.290338    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:16.294827    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:16.294836    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:16.306842    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:16.306851    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:16.319017    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:16.319026    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:16.331072    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:16.331086    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:16.367528    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:16.367540    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:16.379650    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:16.379662    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:16.397256    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:16.397269    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:16.411708    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:16.411718    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:16.423293    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:16.423304    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:16.442243    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:16.442336    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:16.459770    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:16.459779    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:16.478033    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:16.478046    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:16.491057    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:16.491068    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:16.505267    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:16.505278    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:16.505306    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:16.505311    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:16.505317    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:16.505321    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:16.505326    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:21:26.507361    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:31.509645    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:31.509867    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:31.537205    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:31.537293    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:31.553129    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:31.553202    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:31.564760    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:31.564857    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:31.577695    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:31.577774    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:31.588426    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:31.588496    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:31.599106    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:31.599174    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:31.609379    3839 logs.go:276] 0 containers: []
	W0731 10:21:31.609392    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:31.609450    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:31.620017    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:31.620032    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:31.620038    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:31.631803    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:31.631815    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:31.647036    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:31.647046    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:31.664705    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:31.664715    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:31.676710    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:31.676720    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:31.719999    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:31.720019    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:31.741780    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:31.741792    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:31.754021    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:31.754032    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:31.765955    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:31.765965    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:31.771367    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:31.771377    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:31.788946    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:31.788958    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:31.803063    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:31.803074    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:31.819049    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:31.819061    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:31.830470    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:31.830481    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:31.849586    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:31.849682    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:31.867026    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:31.867037    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:31.890908    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:31.890915    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:31.890938    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:31.890942    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:31.890946    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:31.890964    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:31.890969    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:21:41.894485    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:46.896741    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:46.900795    3839 out.go:177] 
	W0731 10:21:46.904870    3839 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 10:21:46.904881    3839 out.go:239] * 
	W0731 10:21:46.905858    3839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:21:46.915733    3839 out.go:177] 

** /stderr **
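The stderr capture above repeats one diagnostic cycle until the 6m0s node wait expires: poll https://10.0.2.15:8443/healthz, hit the 5s client timeout, re-enumerate the k8s_* containers, and re-gather their logs. The only kubelet problem it ever surfaces is the 17:13:46 reflector warning about the "kube-proxy" ConfigMap, whose "no relationship found between node ... and this object" text comes from the node authorizer; it was recorded once and is simply re-reported on every pass. A minimal manual probe of the same endpoint, assuming the running-upgrade-323000 profile is still up and curl is available in the guest (both commands mirror what ssh_runner executes in the log):

	# Probe the healthz endpoint the log polls, with a similarly short timeout:
	minikube ssh -p running-upgrade-323000 "curl -k --max-time 5 https://10.0.2.15:8443/healthz"

	# Re-run the container enumeration minikube performs before each log-gathering pass:
	minikube ssh -p running-upgrade-323000 "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"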
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
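A hedged repro of this step outside the harness, assuming the standard v1.26.0 release asset name and a HEAD build at out/minikube-darwin-arm64 (the layout the failing command shows):

	# Start the profile with the old release binary, then upgrade it in place
	# with the freshly built binary, as the failing command above does:
	curl -LO https://github.com/kubernetes/minikube/releases/download/v1.26.0/minikube-darwin-arm64
	chmod +x minikube-darwin-arm64
	./minikube-darwin-arm64 start -p running-upgrade-323000 --memory=2200 --driver=qemu2
	out/minikube-darwin-arm64 start -p running-upgrade-323000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2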
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-31 10:21:47.005937 -0700 PDT m=+2561.982321251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-323000 -n running-upgrade-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-323000 -n running-upgrade-323000: exit status 2 (15.529186833s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
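The probe above reads only the {{.Host}} field, which still says Running; the nonzero exit presumably encodes the other, unhealthy components (the apiserver that never passed healthz), which is why the harness treats status 2 as possibly ok. For the full component view rather than a single field, a sketch assuming the same profile:

	# Emit every status field as JSON instead of only Host:
	out/minikube-darwin-arm64 status -p running-upgrade-323000 --output json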
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-323000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo cat                            | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo                                | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo find                           | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-336000 sudo crio                           | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-336000                                     | cilium-336000             | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:12 PDT |
	| start   | -p kubernetes-upgrade-013000                         | kubernetes-upgrade-013000 | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-290000                             | offline-docker-290000     | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:12 PDT |
	| start   | -p stopped-upgrade-608000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:13 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-013000                         | kubernetes-upgrade-013000 | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:12 PDT |
	| start   | -p kubernetes-upgrade-013000                         | kubernetes-upgrade-013000 | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-013000                         | kubernetes-upgrade-013000 | jenkins | v1.33.1 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:12 PDT |
	| start   | -p running-upgrade-323000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 10:12 PDT | 31 Jul 24 10:13 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-323000                            | running-upgrade-323000    | jenkins | v1.33.1 | 31 Jul 24 10:13 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-608000 stop                          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 10:13 PDT | 31 Jul 24 10:13 PDT |
	| start   | -p stopped-upgrade-608000                            | stopped-upgrade-608000    | jenkins | v1.33.1 | 31 Jul 24 10:13 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 10:13:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:13:47.144531    3892 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:13:47.144712    3892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:47.144716    3892 out.go:304] Setting ErrFile to fd 2...
	I0731 10:13:47.144718    3892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:47.144846    3892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:13:47.145803    3892 out.go:298] Setting JSON to false
	I0731 10:13:47.164069    3892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2591,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:13:47.164149    3892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:13:47.168986    3892 out.go:177] * [stopped-upgrade-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:13:47.177148    3892 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:13:47.177211    3892 notify.go:220] Checking for updates...
	I0731 10:13:47.185129    3892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:13:47.186393    3892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:13:47.189111    3892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:13:47.192133    3892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:13:47.195139    3892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:13:47.198441    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:13:47.202092    3892 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 10:13:47.205968    3892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:13:47.210177    3892 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:13:47.220115    3892 start.go:297] selected driver: qemu2
	I0731 10:13:47.220127    3892 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:47.220192    3892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:13:47.223128    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:13:47.223150    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:13:47.223174    3892 start.go:340] cluster config:
	{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:47.223235    3892 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:13:47.228202    3892 out.go:177] * Starting "stopped-upgrade-608000" primary control-plane node in "stopped-upgrade-608000" cluster
	I0731 10:13:47.236101    3892 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:13:47.236122    3892 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 10:13:47.236137    3892 cache.go:56] Caching tarball of preloaded images
	I0731 10:13:47.236214    3892 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:13:47.236221    3892 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 10:13:47.236289    3892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/config.json ...
	I0731 10:13:47.236582    3892 start.go:360] acquireMachinesLock for stopped-upgrade-608000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:13:47.236616    3892 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "stopped-upgrade-608000"
	I0731 10:13:47.236625    3892 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:13:47.236631    3892 fix.go:54] fixHost starting: 
	I0731 10:13:47.236744    3892 fix.go:112] recreateIfNeeded on stopped-upgrade-608000: state=Stopped err=<nil>
	W0731 10:13:47.236752    3892 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:13:47.245134    3892 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-608000" ...
	I0731 10:13:44.747496    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:44.747544    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:47.248120    3892 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:13:47.248190    3892 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50281-:22,hostfwd=tcp::50282-:2376,hostname=stopped-upgrade-608000 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/disk.qcow2
	I0731 10:13:47.294470    3892 main.go:141] libmachine: STDOUT: 
	I0731 10:13:47.294496    3892 main.go:141] libmachine: STDERR: 
	I0731 10:13:47.294502    3892 main.go:141] libmachine: Waiting for VM to start (ssh -p 50281 docker@127.0.0.1)...
	I0731 10:13:49.748067    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:49.748150    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:54.749040    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:54.749078    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:13:59.749965    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:13:59.750015    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:04.751239    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:04.751316    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:07.314987    3892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/config.json ...
	I0731 10:14:07.315567    3892 machine.go:94] provisionDockerMachine start ...
	I0731 10:14:07.315709    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.316056    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.316069    3892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 10:14:07.388054    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 10:14:07.388082    3892 buildroot.go:166] provisioning hostname "stopped-upgrade-608000"
	I0731 10:14:07.388212    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.388432    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.388443    3892 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-608000 && echo "stopped-upgrade-608000" | sudo tee /etc/hostname
	I0731 10:14:07.457818    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-608000
	
	I0731 10:14:07.457886    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.458023    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.458034    3892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-608000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-608000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-608000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:14:07.518569    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:14:07.518584    3892 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19349-1049/.minikube CaCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19349-1049/.minikube}
	I0731 10:14:07.518594    3892 buildroot.go:174] setting up certificates
	I0731 10:14:07.518599    3892 provision.go:84] configureAuth start
	I0731 10:14:07.518608    3892 provision.go:143] copyHostCerts
	I0731 10:14:07.518685    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem, removing ...
	I0731 10:14:07.518692    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem
	I0731 10:14:07.518804    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem (1123 bytes)
	I0731 10:14:07.519011    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem, removing ...
	I0731 10:14:07.519015    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem
	I0731 10:14:07.519078    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem (1679 bytes)
	I0731 10:14:07.519203    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem, removing ...
	I0731 10:14:07.519207    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem
	I0731 10:14:07.519269    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem (1082 bytes)
	I0731 10:14:07.519375    3892 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-608000 san=[127.0.0.1 localhost minikube stopped-upgrade-608000]
	I0731 10:14:07.663867    3892 provision.go:177] copyRemoteCerts
	I0731 10:14:07.663911    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:14:07.663920    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:07.694711    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:14:07.701534    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 10:14:07.708352    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 10:14:07.715722    3892 provision.go:87] duration metric: took 197.121125ms to configureAuth
	I0731 10:14:07.715730    3892 buildroot.go:189] setting minikube options for container-runtime
	I0731 10:14:07.715849    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:14:07.715882    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.715967    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.715973    3892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 10:14:07.770941    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 10:14:07.770949    3892 buildroot.go:70] root file system type: tmpfs
	I0731 10:14:07.770997    3892 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 10:14:07.771041    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.771142    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.771174    3892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 10:14:07.832064    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 10:14:07.832119    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.832234    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.832245    3892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 10:14:08.177276    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 10:14:08.177292    3892 machine.go:97] duration metric: took 861.729625ms to provisionDockerMachine
	I0731 10:14:08.177298    3892 start.go:293] postStartSetup for "stopped-upgrade-608000" (driver="qemu2")
	I0731 10:14:08.177305    3892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:14:08.177361    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:14:08.177369    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:08.208679    3892 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:14:08.210051    3892 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 10:14:08.210060    3892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/addons for local assets ...
	I0731 10:14:08.210146    3892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/files for local assets ...
	I0731 10:14:08.210273    3892 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem -> 15522.pem in /etc/ssl/certs
	I0731 10:14:08.210408    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 10:14:08.212952    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:14:08.220166    3892 start.go:296] duration metric: took 42.863417ms for postStartSetup
	I0731 10:14:08.220178    3892 fix.go:56] duration metric: took 20.983913s for fixHost
	I0731 10:14:08.220207    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:08.220313    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:08.220321    3892 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 10:14:08.274397    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722446048.222363337
	
	I0731 10:14:08.274404    3892 fix.go:216] guest clock: 1722446048.222363337
	I0731 10:14:08.274408    3892 fix.go:229] Guest: 2024-07-31 10:14:08.222363337 -0700 PDT Remote: 2024-07-31 10:14:08.22018 -0700 PDT m=+21.094966668 (delta=2.183337ms)
	I0731 10:14:08.274417    3892 fix.go:200] guest clock delta is within tolerance: 2.183337ms
	I0731 10:14:08.274421    3892 start.go:83] releasing machines lock for "stopped-upgrade-608000", held for 21.038163667s
	I0731 10:14:08.274486    3892 ssh_runner.go:195] Run: cat /version.json
	I0731 10:14:08.274497    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:08.274487    3892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:14:08.274535    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	W0731 10:14:08.275041    3892 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50281: connect: connection refused
	I0731 10:14:08.275064    3892 retry.go:31] will retry after 148.398824ms: dial tcp [::1]:50281: connect: connection refused
	W0731 10:14:08.301543    3892 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 10:14:08.301595    3892 ssh_runner.go:195] Run: systemctl --version
	I0731 10:14:08.303515    3892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 10:14:08.305074    3892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 10:14:08.305098    3892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 10:14:08.307916    3892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 10:14:08.312210    3892 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 10:14:08.312219    3892 start.go:495] detecting cgroup driver to use...
	I0731 10:14:08.312297    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:14:08.319449    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 10:14:08.322363    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 10:14:08.325132    3892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 10:14:08.325151    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 10:14:08.328555    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:14:08.331927    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 10:14:08.335345    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:14:08.338319    3892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:14:08.341013    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 10:14:08.344314    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 10:14:08.347934    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 10:14:08.351318    3892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:14:08.353806    3892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:14:08.356717    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:08.412180    3892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:14:08.419063    3892 start.go:495] detecting cgroup driver to use...
	I0731 10:14:08.419130    3892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 10:14:08.424421    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:14:08.432955    3892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 10:14:08.443408    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:14:08.448510    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:14:08.453429    3892 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 10:14:08.496249    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:14:08.502346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:14:08.507851    3892 ssh_runner.go:195] Run: which cri-dockerd
	I0731 10:14:08.509022    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 10:14:08.511623    3892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 10:14:08.516627    3892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 10:14:08.600541    3892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 10:14:08.671613    3892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 10:14:08.671670    3892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 10:14:08.677027    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:08.757025    3892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:14:09.909817    3892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152793542s)
	I0731 10:14:09.909881    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 10:14:09.914271    3892 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 10:14:09.919075    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:14:09.923445    3892 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 10:14:09.991823    3892 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 10:14:10.050754    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:10.112700    3892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 10:14:10.119041    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:14:10.123229    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:10.195092    3892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 10:14:10.233498    3892 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 10:14:10.233565    3892 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 10:14:10.235479    3892 start.go:563] Will wait 60s for crictl version
	I0731 10:14:10.235520    3892 ssh_runner.go:195] Run: which crictl
	I0731 10:14:10.237065    3892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:14:10.251887    3892 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 10:14:10.251952    3892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:14:10.267858    3892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:14:10.292737    3892 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 10:14:10.292848    3892 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 10:14:10.294093    3892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:14:10.297557    3892 kubeadm.go:883] updating cluster {Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 10:14:10.297601    3892 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:14:10.297640    3892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:14:10.308124    3892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:14:10.308131    3892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 10:14:10.308170    3892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:14:10.311717    3892 ssh_runner.go:195] Run: which lz4
	I0731 10:14:10.312964    3892 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 10:14:10.314186    3892 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 10:14:10.314195    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 10:14:11.267551    3892 docker.go:649] duration metric: took 954.633375ms to copy over tarball
	I0731 10:14:11.267608    3892 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 10:14:09.753366    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:09.753433    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:12.425256    3892 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157652584s)
	I0731 10:14:12.425269    3892 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 10:14:12.440851    3892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:14:12.444498    3892 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 10:14:12.449627    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:12.520585    3892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:14:14.034728    3892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.514148666s)
	I0731 10:14:14.034821    3892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:14:14.048658    3892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:14:14.048668    3892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 10:14:14.048673    3892 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 10:14:14.053657    3892 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.055376    3892 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.057366    3892 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.057559    3892 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.059224    3892 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.059374    3892 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.061369    3892 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.061490    3892 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.062235    3892 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 10:14:14.063742    3892 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.063968    3892 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.064060    3892 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.065352    3892 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.065776    3892 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 10:14:14.067251    3892 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.068306    3892 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.455326    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.467721    3892 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 10:14:14.467747    3892 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.467807    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.477666    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.480743    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 10:14:14.487826    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.490464    3892 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 10:14:14.490482    3892 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.490517    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.501707    3892 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 10:14:14.501729    3892 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.501787    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.506333    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 10:14:14.512157    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 10:14:14.516674    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.518949    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 10:14:14.528864    3892 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 10:14:14.528889    3892 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.528941    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.537196    3892 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 10:14:14.537221    3892 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 10:14:14.537269    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 10:14:14.544175    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 10:14:14.544298    3892 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:14:14.549905    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 10:14:14.549936    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 10:14:14.549946    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 10:14:14.550008    3892 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0731 10:14:14.551699    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 10:14:14.551718    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0731 10:14:14.570839    3892 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 10:14:14.571011    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.580787    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.582889    3892 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 10:14:14.582900    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 10:14:14.609775    3892 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 10:14:14.609795    3892 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.609773    3892 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 10:14:14.609838    3892 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.609848    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.609914    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.660129    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 10:14:14.660129    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 10:14:14.660226    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 10:14:14.660333    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:14:14.675191    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 10:14:14.675220    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0731 10:14:14.694213    3892 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 10:14:14.694332    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.730488    3892 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 10:14:14.730510    3892 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.730572    3892 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.779734    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 10:14:14.779870    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:14:14.783654    3892 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:14:14.783664    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 10:14:14.785174    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 10:14:14.785200    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 10:14:14.897988    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 10:14:14.898016    3892 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:14:14.898024    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 10:14:15.194002    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 10:14:15.194027    3892 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:14:15.194033    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 10:14:15.348595    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 10:14:15.348632    3892 cache_images.go:92] duration metric: took 1.299975125s to LoadCachedImages
	W0731 10:14:15.348676    3892 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
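The load sequence above repeats the same three-step pattern per image: an existence check on the node, an scp of the cached tarball if it is absent, and a pipe into the Docker daemon. A minimal shell sketch of that pattern, using paths from this log (the loop and the `node` host alias are illustrative; the actual implementation is Go code in cache_images.go):

CACHE="$HOME/.minikube/cache/images/arm64/registry.k8s.io"   # per-arch cache dir shown above
for img in pause_3.7 etcd_3.5.3-0; do
  # Exit status 1 from stat means the tarball is not on the node yet.
  if ! ssh node "stat -c '%s %y' /var/lib/minikube/images/${img}" >/dev/null 2>&1; then
    scp "${CACHE}/${img}" "node:/var/lib/minikube/images/${img}"
  fi
  # sudo cat | docker load, exactly as in the Run: lines above.
  ssh node "sudo cat /var/lib/minikube/images/${img} | docker load"
done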
	I0731 10:14:15.348683    3892 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 10:14:15.348743    3892 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-608000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
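In the kubelet drop-in above, the empty `ExecStart=` line is deliberate: in a systemd override, an empty assignment clears the command inherited from the base kubelet.service, and the following line installs the new one. Written by hand it would look roughly like this (a sketch; the target path matches the scp of 10-kubeadm.conf further down):

sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Service]
# Empty ExecStart= resets the inherited command; the next line replaces it.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
EOF
sudo systemctl daemon-reload   # needed before the override is picked up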
	I0731 10:14:15.348808    3892 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 10:14:15.362515    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:14:15.362526    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:14:15.362532    3892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 10:14:15.362541    3892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-608000 NodeName:stopped-upgrade-608000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 10:14:15.362607    3892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-608000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 10:14:15.362659    3892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 10:14:15.365373    3892 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:14:15.365399    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:14:15.368416    3892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 10:14:15.373306    3892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 10:14:15.378264    3892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
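The rendered kubeadm config (2096 bytes, matching the dump above) lands as /var/tmp/minikube/kubeadm.yaml.new and is only promoted over kubeadm.yaml after the drift check further down. It bundles four API documents in one file, which can be confirmed on the node with a quick check (illustrative, not part of the test):

grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
# kind: InitConfiguration
# kind: ClusterConfiguration
# kind: KubeletConfiguration
# kind: KubeProxyConfiguration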
	I0731 10:14:15.383522    3892 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 10:14:15.384712    3892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
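The /etc/hosts rewrite above is idempotent: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and the staged file is copied over /etc/hosts in a single cp. Generalized (IP and NAME are placeholders):

IP=10.0.2.15 NAME=control-plane.minikube.internal
{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts   # one cp keeps the swap close to atomic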
	I0731 10:14:15.388467    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:15.451870    3892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:14:15.458356    3892 certs.go:68] Setting up /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000 for IP: 10.0.2.15
	I0731 10:14:15.458366    3892 certs.go:194] generating shared ca certs ...
	I0731 10:14:15.458374    3892 certs.go:226] acquiring lock for ca certs: {Name:mkb79625058c6882a075e84437e973267a00b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.458528    3892 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key
	I0731 10:14:15.458582    3892 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key
	I0731 10:14:15.458590    3892 certs.go:256] generating profile certs ...
	I0731 10:14:15.458664    3892 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key
	I0731 10:14:15.458681    3892 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f
	I0731 10:14:15.458691    3892 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 10:14:15.589394    3892 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f ...
	I0731 10:14:15.589410    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f: {Name:mkdac14c68f1209e9af466ff8479005de0539e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.589927    3892 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f ...
	I0731 10:14:15.589933    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f: {Name:mkab18db9ee2ced4e2fc9a122561b6f64ffd5956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.590106    3892 certs.go:381] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt
	I0731 10:14:15.590271    3892 certs.go:385] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key
	I0731 10:14:15.590419    3892 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.key
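The regenerated apiserver cert carries four IP SANs: 10.96.0.1 (the in-cluster kubernetes service VIP, the first address of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1, and 10.0.2.15, the node IP under QEMU user-mode networking. minikube does this in Go (crypto.go); a rough openssl equivalent, self-signed here for brevity where the real cert is signed by minikubeCA:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj '/CN=minikube' \
  -addext 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:10.0.2.15' \
  -keyout apiserver.key -out apiserver.crt   # -addext requires OpenSSL 1.1.1+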
	I0731 10:14:15.590562    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem (1338 bytes)
	W0731 10:14:15.590593    3892 certs.go:480] ignoring /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552_empty.pem, impossibly tiny 0 bytes
	I0731 10:14:15.590598    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 10:14:15.590618    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:14:15.590637    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:14:15.590657    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem (1679 bytes)
	I0731 10:14:15.590697    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:14:15.591049    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:14:15.598008    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 10:14:15.604568    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:14:15.611913    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 10:14:15.619320    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 10:14:15.626313    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 10:14:15.632839    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:14:15.640019    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 10:14:15.647475    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem --> /usr/share/ca-certificates/1552.pem (1338 bytes)
	I0731 10:14:15.654275    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /usr/share/ca-certificates/15522.pem (1708 bytes)
	I0731 10:14:15.660757    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:14:15.668127    3892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:14:15.674677    3892 ssh_runner.go:195] Run: openssl version
	I0731 10:14:15.676539    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1552.pem && ln -fs /usr/share/ca-certificates/1552.pem /etc/ssl/certs/1552.pem"
	I0731 10:14:15.679862    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.681398    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:45 /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.681419    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.683319    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1552.pem /etc/ssl/certs/51391683.0"
	I0731 10:14:15.686139    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15522.pem && ln -fs /usr/share/ca-certificates/15522.pem /etc/ssl/certs/15522.pem"
	I0731 10:14:15.689120    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.690658    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:45 /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.690678    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.692416    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15522.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 10:14:15.695869    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:14:15.699162    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.700610    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:39 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.700630    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.702397    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
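Each trusted PEM also gets a hash-named symlink in /etc/ssl/certs, which is how OpenSSL locates CAs at verification time: `openssl x509 -hash -noout` prints the 8-hex-digit subject hash (51391683, 3ec20f2e, b5213941 in the runs above), and the link name ends in .0 (colliding hashes would take .1, .2, ...). The pattern, condensed:

pem=/usr/share/ca-certificates/minikubeCA.pem
h=$(openssl x509 -hash -noout -in "$pem")                      # e.g. b5213941
sudo ln -fs "/etc/ssl/certs/$(basename "$pem")" "/etc/ssl/certs/${h}.0"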
	I0731 10:14:15.705205    3892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 10:14:15.706707    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 10:14:15.708588    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 10:14:15.710551    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 10:14:15.712446    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 10:14:15.714393    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 10:14:15.716185    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
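`-checkend 86400` makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours), so six silent passes here mean none of the control-plane certs need regenerating. For example:

for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
  openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
    || echo "${crt}.crt expires within 24h"   # non-zero exit flags imminent expiry
done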
	I0731 10:14:15.718132    3892 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:14:15.718200    3892 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:14:15.728678    3892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:14:15.732149    3892 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 10:14:15.732159    3892 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 10:14:15.732188    3892 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 10:14:15.734988    3892 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:14:15.735303    3892 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-608000" does not appear in /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:14:15.735400    3892 kubeconfig.go:62] /Users/jenkins/minikube-integration/19349-1049/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-608000" cluster setting kubeconfig missing "stopped-upgrade-608000" context setting]
	I0731 10:14:15.735595    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.736310    3892 kapi.go:59] client config for stopped-upgrade-608000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105cb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:14:15.736655    3892 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 10:14:15.739504    3892 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-608000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
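The drift detection is simply `sudo diff -u` between the kubeadm.yaml already on disk and the freshly rendered .new file (the Run: at 10:14:15.736655); any non-empty diff, here the criSocket scheme and the cgroupDriver/kubelet options, forces a reconfigure. In outline:

if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  # Drift detected: stop kube-system containers and kubelet, promote the new
  # config, then re-run the kubeadm init phases against it.
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi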
	I0731 10:14:15.739509    3892 kubeadm.go:1160] stopping kube-system containers ...
	I0731 10:14:15.739547    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:14:15.754597    3892 docker.go:483] Stopping containers: [a2950ef315b5 7b8c410172e6 5892fd1585da a8cae9097f6b d04cca0c827c 150271b28573 c224a139058d 7dddf93acbff]
	I0731 10:14:15.754659    3892 ssh_runner.go:195] Run: docker stop a2950ef315b5 7b8c410172e6 5892fd1585da a8cae9097f6b d04cca0c827c 150271b28573 c224a139058d 7dddf93acbff
	I0731 10:14:15.765447    3892 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 10:14:15.771186    3892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:14:15.773980    3892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:14:15.773988    3892 kubeadm.go:157] found existing configuration files:
	
	I0731 10:14:15.774014    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf
	I0731 10:14:15.776608    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:14:15.776631    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:14:15.779627    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf
	I0731 10:14:15.782175    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:14:15.782194    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:14:15.784757    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf
	I0731 10:14:15.787577    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:14:15.787597    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:14:15.790131    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf
	I0731 10:14:15.792625    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:14:15.792645    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 10:14:15.795654    3892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:14:15.798335    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:15.821289    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.388061    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.497697    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.519825    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
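Because existing configuration files were found, the restart path re-runs individual `kubeadm init phase` subcommands rather than a full `kubeadm init`, in the order shown above:

for phase in 'certs all' 'kubeconfig all' kubelet-start 'control-plane all' 'etcd local'; do
  # Unquoted $phase is intentional: 'certs all' expands into two arguments.
  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase $phase \
    --config /var/tmp/minikube/kubeadm.yaml
done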
	I0731 10:14:16.547263    3892 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:14:16.547348    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:17.049483    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:14.754063    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:14.754082    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:17.549439    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:17.554219    3892 api_server.go:72] duration metric: took 1.006977125s to wait for apiserver process to appear ...
	I0731 10:14:17.554227    3892 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:14:17.554236    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
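From here on, both upgrade profiles (pid 3839, running-upgrade-323000, and pid 3892, stopped-upgrade-608000) interleave while polling the apiserver's /healthz endpoint; each "stopped:" line is one probe that hit the client timeout. The wait loop, approximated in shell (the real implementation is Go code in api_server.go):

# -k skips TLS verification; -m 5 approximates the client timeout seen above.
until curl -ksf -m 5 https://10.0.2.15:8443/healthz >/dev/null; do
  sleep 5   # retry until the endpoint answers
done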
	I0731 10:14:19.756136    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:19.756166    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:22.556336    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:22.556382    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:24.758286    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:24.758330    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:27.557290    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:27.557335    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:29.761599    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:29.761788    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:14:29.778717    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:14:29.778806    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:14:29.792076    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:14:29.792148    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:14:29.804176    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:14:29.804234    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:14:29.814742    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:14:29.814807    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:14:29.824791    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:14:29.824856    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:14:29.836196    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:14:29.836262    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:14:29.845982    3839 logs.go:276] 0 containers: []
	W0731 10:14:29.846003    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:14:29.846054    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:14:29.856655    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:14:29.856676    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:14:29.856681    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:14:29.900234    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:29.900332    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:29.901893    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:14:29.901898    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:14:29.915608    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:14:29.915619    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:14:29.926388    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:14:29.926401    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:14:29.938084    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:14:29.938094    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:14:29.949437    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:14:29.949448    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:14:29.960568    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:14:29.960587    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:14:29.972365    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:14:29.972376    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:14:29.977362    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:14:29.977371    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:14:29.988569    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:14:29.988581    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:14:30.000365    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:14:30.000377    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:14:30.018174    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:14:30.018184    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:14:30.089194    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:14:30.089207    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:14:30.107925    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:14:30.107943    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:14:30.127351    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:14:30.127365    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:14:30.138539    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:14:30.138551    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:14:30.150940    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:14:30.150950    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:14:30.176624    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:30.176632    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:14:30.176661    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:14:30.176665    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:30.176669    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:30.176676    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:30.176678    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
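Each diagnostic sweep above follows one recipe: enumerate the containers for every control-plane component via the k8s_<component> name filter, tail the last 400 log lines of each, and add kubelet and docker logs from journalctl. Condensed:

for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet storage-provisioner; do
  docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}' \
    | xargs -rn1 docker logs --tail 400   # last 400 lines per matching container
done
sudo journalctl -u kubelet -n 400         # scanned for "Found kubelet problem" lines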
	I0731 10:14:32.558862    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:32.558905    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:37.560242    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:37.560312    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:40.182279    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:42.561931    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:42.561973    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:45.185110    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:45.185449    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:14:45.223499    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:14:45.223637    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:14:45.252301    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:14:45.252391    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:14:45.265759    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:14:45.265834    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:14:45.277439    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:14:45.277517    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:14:45.288207    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:14:45.288273    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:14:45.299464    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:14:45.299539    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:14:45.309856    3839 logs.go:276] 0 containers: []
	W0731 10:14:45.309868    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:14:45.309929    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:14:45.321273    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:14:45.321291    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:14:45.321296    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:14:45.348196    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:14:45.348204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:14:45.360271    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:14:45.360284    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:14:45.379191    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:14:45.379204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:14:45.390778    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:14:45.390791    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:14:45.407033    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:14:45.407047    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:14:45.421600    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:14:45.421611    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:14:45.439347    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:14:45.439359    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:14:45.450865    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:14:45.450875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:14:45.468909    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:14:45.468922    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:14:45.485745    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:14:45.485756    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:14:45.497441    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:14:45.497453    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:14:45.540229    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:45.540322    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:45.541826    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:14:45.541831    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:14:45.579590    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:14:45.579601    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:14:45.591822    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:14:45.591834    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:14:45.609282    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:14:45.609292    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:14:45.613718    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:14:45.613728    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:14:45.625910    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:45.625921    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:14:45.625949    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:14:45.625956    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:14:45.625962    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:14:45.625966    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:14:45.625970    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:14:47.563413    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:47.563496    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:52.565268    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:52.565306    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:55.630642    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:57.567334    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:57.567408    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:00.632559    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:00.632832    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:00.659861    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:00.659976    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:00.678693    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:00.678773    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:00.692541    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:00.692612    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:00.704646    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:00.704723    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:00.714894    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:00.714961    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:00.725634    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:00.725702    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:00.735393    3839 logs.go:276] 0 containers: []
	W0731 10:15:00.735405    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:00.735453    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:00.746143    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:00.746160    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:00.746165    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:00.761629    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:00.761640    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:00.777308    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:00.777318    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:00.788846    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:00.788858    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:00.793380    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:00.793388    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:00.804572    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:00.804585    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:00.832502    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:00.832511    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:00.875318    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:00.875330    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:00.889777    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:00.889788    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:00.907289    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:00.907302    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:00.918385    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:00.918400    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:00.931028    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:00.931041    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:00.975420    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:00.975515    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:00.977112    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:00.977121    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:00.988879    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:00.988888    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:01.000147    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:01.000161    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:01.011584    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:01.011597    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:01.025310    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:01.025320    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:01.036311    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:01.036324    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:01.036352    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:01.036357    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:01.036360    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:01.036364    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:01.036367    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
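
	(Editor's note: the "Found kubelet problem" warnings and the "X Problems detected in kubelet:" summary above come from scanning the `journalctl -u kubelet` output for known failure substrings. A minimal Go sketch of that idea follows; the pattern list, function names, and bufio-based scan are illustrative assumptions, not minikube's actual logs.go implementation.)

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// scanKubeletLogs collects journal lines matching known problem
	// substrings, the way the "Found kubelet problem" warnings above
	// are surfaced. Sketch only: the pattern list is an assumed example.
	func scanKubeletLogs(journal string) []string {
		patterns := []string{"failed to list", "Failed to watch", "forbidden"}
		var problems []string
		sc := bufio.NewScanner(strings.NewReader(journal))
		for sc.Scan() {
			line := sc.Text()
			for _, p := range patterns {
				if strings.Contains(line, p) {
					problems = append(problems, line)
					break
				}
			}
		}
		return problems
	}

	func main() {
		journal := "Jul 31 17:13:46 kubelet[4038]: W0731 reflector.go:324 failed to list *v1.ConfigMap\n" +
			"Jul 31 17:13:46 kubelet[4038]: I0731 normal startup line"
		for _, p := range scanKubeletLogs(journal) {
			fmt.Println("Found kubelet problem:", p)
		}
	}
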
	I0731 10:15:02.570022    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:02.570101    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:07.572585    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:07.572631    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:11.039630    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:12.574774    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:12.574848    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:16.041960    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
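
	(Editor's note: the alternating "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pairs above are a bounded poll against the apiserver's /healthz endpoint, where each attempt is cut off by a short client timeout. A minimal Go sketch of the pattern follows; the 5-second timeout, endpoint URL, and overall deadline are assumptions read off the log lines, not minikube's actual api_server.go code.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz issues GET <url> with a short per-request timeout,
	// retrying until the deadline passes or the endpoint answers 200.
	func pollHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			// The per-request timeout paces the retries, matching the
			// ~5s gaps between "Checking"/"stopped" pairs in the log.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The kubeadm serving cert is not in the host trust store.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// Mirrors: "stopped: ... Client.Timeout exceeded while awaiting headers"
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
			time.Sleep(time.Second) // avoid a tight loop on non-200 replies
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(2*time.Minute)); err != nil {
			fmt.Println(err)
		}
	}
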
	I0731 10:15:16.042148    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:16.061192    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:16.061291    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:16.076544    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:16.076634    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:16.088716    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:16.088788    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:16.099551    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:16.099619    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:16.110093    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:16.110192    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:16.120889    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:16.120955    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:16.131261    3839 logs.go:276] 0 containers: []
	W0731 10:15:16.131271    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:16.131331    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:16.141674    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:16.141690    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:16.141695    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:16.157661    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:16.157671    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:16.176396    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:16.176411    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:16.187464    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:16.187477    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:16.199418    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:16.199427    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:16.242453    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:16.242547    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:16.244155    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:16.244159    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:16.255086    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:16.255099    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:16.266439    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:16.266453    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:16.278038    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:16.278050    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:16.304228    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:16.304236    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:16.308265    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:16.308272    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:16.323888    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:16.323900    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:16.335307    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:16.335318    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:16.352788    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:16.352799    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:16.388267    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:16.388278    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:16.403989    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:16.404000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:16.417786    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:16.417797    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:16.429644    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:16.429656    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:16.429685    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:16.429689    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:16.429692    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:16.429697    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:16.429699    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:17.577348    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:17.577568    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:17.606953    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:17.607072    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:17.623872    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:17.623959    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:17.636886    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:17.636963    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:17.647824    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:17.647902    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:17.658482    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:17.658562    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:17.669098    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:17.669169    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:17.679232    3892 logs.go:276] 0 containers: []
	W0731 10:15:17.679242    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:17.679295    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:17.689328    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:17.689347    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:17.689355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:17.700151    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:17.700161    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:17.714279    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:17.714293    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:17.725461    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:17.725473    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:17.740551    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:17.740563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:17.758333    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:17.758344    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:17.771058    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:17.771068    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:17.775665    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:17.775673    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:17.854774    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:17.854788    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:17.871545    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:17.871557    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:17.882804    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:17.882814    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:17.921821    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:17.921831    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:17.935033    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:17.935044    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:17.959254    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:17.959267    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:18.001274    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:18.001285    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:18.021779    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:18.021790    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:18.036739    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:18.036752    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
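
	(Editor's note: each gathering cycle above first resolves container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails each container's log with `docker logs --tail 400`. The Go sketch below reproduces one such cycle locally via os/exec; in minikube these same commands run over SSH inside the guest, and the helper names here are illustrative assumptions.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all container IDs whose name matches k8s_<component>,
	// mirroring the "docker ps -a --filter=name=... --format={{.ID}}" calls above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The components polled in every cycle of the log above.
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				// Mirrors: `No container was found matching "kindnet"`
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Same tail depth as the gathering cycles above.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}
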
	I0731 10:15:20.550525    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:25.552904    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:25.553137    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:25.573763    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:25.573869    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:25.589605    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:25.589696    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:25.602065    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:25.602130    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:25.612394    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:25.612466    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:25.628606    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:25.628683    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:25.638963    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:25.639029    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:25.649272    3892 logs.go:276] 0 containers: []
	W0731 10:15:25.649283    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:25.649345    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:25.659715    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:25.659733    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:25.659738    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:25.671189    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:25.671201    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:25.675679    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:25.675687    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:25.690498    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:25.690509    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:25.702828    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:25.702842    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:25.728572    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:25.728581    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:25.742552    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:25.742563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:25.780568    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:25.780578    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:25.791820    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:25.791833    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:25.802809    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:25.802820    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:25.814446    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:25.814457    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:25.852431    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:25.852441    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:25.867301    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:25.867311    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:25.879333    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:25.879342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:25.904259    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:25.904269    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:25.917109    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:25.917117    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:25.955598    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:25.955611    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:26.433759    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:28.471994    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:31.434518    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:31.434712    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:31.451152    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:31.451243    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:31.464204    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:31.464276    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:31.475653    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:31.475720    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:31.486304    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:31.486375    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:31.496400    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:31.496466    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:31.506773    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:31.506844    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:31.517388    3839 logs.go:276] 0 containers: []
	W0731 10:15:31.517399    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:31.517457    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:31.528135    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:31.528153    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:31.528158    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:31.540443    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:31.540453    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:31.556996    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:31.557007    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:31.568401    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:31.568411    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:31.593523    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:31.593544    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:31.604609    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:31.604627    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:31.616528    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:31.616545    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:31.659799    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:31.659892    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:31.661414    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:31.661419    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:31.696226    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:31.696236    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:31.711543    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:31.711553    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:31.723154    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:31.723164    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:31.736795    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:31.736809    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:31.748296    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:31.748307    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:31.752759    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:31.752765    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:31.765287    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:31.765299    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:31.782658    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:31.782675    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:31.794526    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:31.794537    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:31.808708    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:31.808718    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:31.808750    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:31.808756    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:31.808771    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:31.808776    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:31.808780    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:33.474293    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:33.474497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:33.494168    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:33.494258    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:33.507484    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:33.507556    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:33.519289    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:33.519357    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:33.530382    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:33.530457    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:33.540724    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:33.540788    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:33.552186    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:33.552259    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:33.562374    3892 logs.go:276] 0 containers: []
	W0731 10:15:33.562384    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:33.562445    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:33.575656    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:33.575675    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:33.575693    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:33.579759    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:33.579766    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:33.616269    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:33.616278    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:33.627403    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:33.627420    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:33.638566    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:33.638577    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:33.675519    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:33.675526    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:33.711286    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:33.711299    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:33.725765    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:33.725776    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:33.737867    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:33.737878    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:33.754928    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:33.754939    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:33.766893    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:33.766905    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:33.784023    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:33.784032    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:33.802920    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:33.802930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:33.817289    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:33.817298    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:33.832786    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:33.832797    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:33.845504    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:33.845514    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:33.857080    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:33.857090    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:36.384585    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:41.386929    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:41.387078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:41.401553    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:41.401637    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:41.413373    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:41.413440    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:41.424155    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:41.424220    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:41.439269    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:41.439337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:41.449261    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:41.449322    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:41.459676    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:41.459745    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:41.469274    3892 logs.go:276] 0 containers: []
	W0731 10:15:41.469285    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:41.469337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:41.479987    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:41.480004    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:41.480010    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:41.493075    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:41.493085    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:41.504331    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:41.504342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:41.518727    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:41.518739    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:41.555632    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:41.555645    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:41.593121    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:41.593135    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:41.606813    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:41.606827    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:41.618573    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:41.618584    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:41.643281    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:41.643289    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:41.655253    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:41.655264    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:41.667220    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:41.667232    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:41.705462    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:41.705470    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:41.716788    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:41.716801    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:41.731641    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:41.731651    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:41.748837    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:41.748850    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:41.760758    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:41.760771    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:41.765567    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:41.765576    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:41.812194    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:44.281275    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:46.814408    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:46.814613    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:46.828983    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:15:46.829062    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:46.840944    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:15:46.841016    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:46.851158    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:15:46.851224    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:46.865645    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:15:46.865710    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:46.876366    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:15:46.876433    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:46.887239    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:15:46.887314    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:46.897187    3839 logs.go:276] 0 containers: []
	W0731 10:15:46.897199    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:46.897256    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:46.907741    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:15:46.907759    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:46.907765    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:46.945427    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:15:46.945437    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:15:46.959651    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:15:46.959663    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:15:46.973773    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:15:46.973784    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:15:46.984938    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:15:46.984950    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:15:46.996242    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:46.996252    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:47.000576    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:15:47.000582    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:15:47.011986    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:15:47.012000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:15:47.029257    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:15:47.029268    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:15:47.040705    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:15:47.040715    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:15:47.051827    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:47.051837    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:47.077557    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:15:47.077565    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:15:47.088790    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:15:47.088801    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:47.100349    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:15:47.100359    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:15:47.111895    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:15:47.111905    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:15:47.123666    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:15:47.123676    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:15:47.142833    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:47.142849    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:15:47.186829    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:47.186932    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:47.188553    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:47.188559    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:15:47.188591    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:15:47.188600    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:15:47.188604    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:15:47.188606    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:15:47.188609    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:15:49.283549    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:49.283741    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:49.301217    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:49.301307    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:49.314982    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:49.315057    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:49.326459    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:49.326528    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:49.337268    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:49.337348    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:49.350480    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:49.350553    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:49.364429    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:49.364501    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:49.378428    3892 logs.go:276] 0 containers: []
	W0731 10:15:49.378440    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:49.378497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:49.389296    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:49.389313    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:49.389318    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:49.403203    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:49.403214    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:49.414677    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:49.414688    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:49.452641    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:49.452654    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:49.470843    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:49.470855    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:49.485251    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:49.485262    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:49.497063    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:49.497074    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:49.508336    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:49.508346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:49.535581    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:49.535590    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:49.539581    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:49.539588    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:49.553819    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:49.553829    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:49.566026    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:49.566037    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:49.583690    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:49.583699    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:49.595661    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:49.595671    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:49.634836    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:49.634845    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:49.674149    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:49.674159    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:49.685140    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:49.685150    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:52.199642    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:57.192693    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:57.201941    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:57.202204    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:57.240391    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:57.240534    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:57.262320    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:57.262423    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:57.283862    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:57.283939    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:57.296648    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:57.296718    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:57.307436    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:57.307511    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:57.319443    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:57.319519    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:57.329921    3892 logs.go:276] 0 containers: []
	W0731 10:15:57.329933    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:57.329992    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:57.340437    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:57.340453    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:57.340458    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:57.356316    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:57.356329    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:57.394298    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:57.394309    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:57.433737    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:57.433751    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:57.437959    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:57.437966    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:57.477812    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:57.477824    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:57.495669    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:57.495682    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:57.506764    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:57.506777    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:57.518223    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:57.518233    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:57.545261    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:57.545274    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:57.559622    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:57.559632    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:57.571739    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:57.571749    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:57.586864    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:57.586874    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:57.604108    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:57.604121    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:57.619747    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:57.619757    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:57.631507    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:57.631521    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:57.642942    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:57.642953    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
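	With the container IDs in hand, the gathering phase shells out once per log source: `docker logs --tail 400 <id>` for each container, journalctl for the kubelet and docker units, a filtered dmesg, and `kubectl describe nodes` against the in-VM kubeconfig. The sketch below lays out that command table under the assumption of a simple local bash runner; minikube actually executes these over SSH inside the guest, and the command strings are copied verbatim from the log:

// gatherCommands: one shell command per "Gathering logs for ..." source.
package main

import (
	"fmt"
	"os/exec"
)

// run is a stand-in for minikube's ssh_runner.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig",
		// <container-id> is a placeholder substituted per discovered container
		"container log": "docker logs --tail 400 <container-id>",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		if out, err := run(cmd); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Print(out)
		}
	}
}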
	I0731 10:16:00.170351    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:02.195379    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:02.195652    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:02.226401    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:02.226527    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:02.243627    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:02.243716    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:02.256747    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:02.256824    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:02.267853    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:02.267918    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:02.278156    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:02.278227    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:02.289102    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:02.289171    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:02.298922    3839 logs.go:276] 0 containers: []
	W0731 10:16:02.298932    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:02.298984    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:02.309334    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:02.309352    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:02.309358    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:02.328102    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:02.328111    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:02.339928    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:02.339942    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:02.380278    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:02.380371    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
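	The two warnings above come from a scan of the kubelet journal for known problem patterns (the "Found kubelet problem" lines, logs.go:138). The underlying error is an authorization failure: the kubelet's node identity, system:node:running-upgrade-323000, is denied listing the kube-proxy ConfigMap because the apiserver's node authorizer finds no relationship between that node and the object, consistent with a control plane that never became healthy after the upgrade. A sketch of such a pattern scan over journal output, with an illustrative pattern list rather than minikube's actual set:

// scanKubeletProblems flags journal lines matching known problem markers,
// similar in spirit to the warnings above.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var problemPatterns = []*regexp.Regexp{
	regexp.MustCompile(`is forbidden: User "system:node:`),
	regexp.MustCompile(`failed to list \*v1\.ConfigMap`),
}

func scanKubeletProblems(journal string) []string {
	var found []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range problemPatterns {
			if p.MatchString(line) {
				found = append(found, line)
				break
			}
		}
	}
	return found
}

func main() {
	journal := `Jul 31 17:13:46 kubelet[4038]: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps"`
	for _, l := range scanKubeletProblems(journal) {
		fmt.Println("Found kubelet problem:", l)
	}
}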
	I0731 10:16:02.381876    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:02.381881    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:02.386496    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:02.386504    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:02.402754    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:02.402767    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:02.421670    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:02.421680    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:02.435821    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:02.435833    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:02.447689    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:02.447700    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:02.459340    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:02.459351    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:02.472369    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:02.472380    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:02.509244    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:02.509257    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:02.520652    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:02.520662    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:02.531596    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:02.531611    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:02.555350    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:02.555358    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:02.566477    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:02.566489    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:02.588977    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:02.588987    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:02.605809    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:02.605821    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:02.605850    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:02.605856    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:02.605860    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:02.605865    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:02.605869    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
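	After this problem report the process returns to the healthz probe, and the cycle (probe, timeout, enumerate containers, tail logs, report) repeats until the test's outer deadline expires; the timestamps above show roughly a 5 s probe timeout plus about 2.5 s of gathering per iteration, which is why the remainder of this section is near-identical passes with fresh timestamps. A compact sketch of that outer wait loop, with all names and durations assumed for illustration only:

// waitForAPIServer repeats the probe-then-diagnose cycle seen throughout
// this log until healthz answers or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForAPIServer(deadline time.Duration, probe func() error, diagnose func()) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if err := probe(); err == nil {
			return nil
		}
		diagnose() // enumerate containers, tail logs, report kubelet problems
		// the log shows ~2.5 s between gathering and the next probe
		time.Sleep(2500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	err := waitForAPIServer(4*time.Minute,
		func() error { return errors.New("context deadline exceeded") },
		func() { fmt.Println("gathering logs ...") })
	fmt.Println(err)
}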
	I0731 10:16:05.172597    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:05.172802    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:05.194471    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:05.194605    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:05.210277    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:05.210365    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:05.222429    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:05.222506    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:05.233641    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:05.233711    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:05.244134    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:05.244198    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:05.255002    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:05.255078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:05.265143    3892 logs.go:276] 0 containers: []
	W0731 10:16:05.265156    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:05.265215    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:05.275720    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:05.275738    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:05.275743    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:05.290624    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:05.290638    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:05.303676    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:05.303686    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:05.327508    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:05.327520    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:05.345458    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:05.345468    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:05.385873    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:05.385887    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:05.424256    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:05.424268    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:05.438463    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:05.438473    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:05.452167    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:05.452177    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:05.467390    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:05.467401    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:05.485389    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:05.485400    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:05.499799    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:05.499808    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:05.514256    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:05.514267    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:05.531656    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:05.531667    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:05.543137    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:05.543148    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:05.547736    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:05.547744    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:05.581348    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:05.581358    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:08.098041    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:12.609643    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:13.100686    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:13.101130    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:13.140595    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:13.140739    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:13.162036    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:13.162140    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:13.177472    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:13.177549    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:13.189779    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:13.189855    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:13.200657    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:13.200750    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:13.211540    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:13.211615    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:13.222340    3892 logs.go:276] 0 containers: []
	W0731 10:16:13.222351    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:13.222412    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:13.233276    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:13.233295    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:13.233301    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:13.251494    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:13.251504    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:13.265418    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:13.265429    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:13.277549    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:13.277563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:13.289277    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:13.289289    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:13.304495    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:13.304506    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:13.341893    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:13.341904    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:13.346003    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:13.346009    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:13.380501    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:13.380515    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:13.418017    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:13.418031    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:13.438630    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:13.438641    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:13.450307    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:13.450320    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:13.466340    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:13.466352    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:13.483771    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:13.483782    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:13.496480    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:13.496493    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:13.507926    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:13.507936    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:13.531307    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:13.531314    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:16.044673    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:17.612252    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:17.612347    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:17.623408    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:17.623477    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:17.634202    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:17.634270    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:17.644629    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:17.644687    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:17.655520    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:17.655597    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:17.666168    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:17.666235    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:17.676373    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:17.676457    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:17.687125    3839 logs.go:276] 0 containers: []
	W0731 10:16:17.687137    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:17.687192    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:17.697440    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:17.697457    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:17.697462    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:17.709681    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:17.709696    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:17.721356    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:17.721369    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:17.733478    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:17.733494    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:17.744946    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:17.744961    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:17.786475    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:17.786567    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:17.788077    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:17.788101    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:17.799637    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:17.799648    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:17.811185    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:17.811195    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:17.836906    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:17.836917    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:17.849123    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:17.849135    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:17.860513    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:17.860529    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:17.871625    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:17.871637    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:17.885239    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:17.885248    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:17.897021    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:17.897035    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:17.942267    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:17.942281    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:17.956306    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:17.956314    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:17.973270    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:17.973284    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:17.978323    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:17.978330    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:17.978361    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:17.978366    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:17.978369    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:17.978373    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:17.978376    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:16:21.047285    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:21.047732    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:21.089357    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:21.089497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:21.111029    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:21.111146    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:21.126649    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:21.126724    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:21.139630    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:21.139706    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:21.151032    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:21.151103    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:21.163490    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:21.163585    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:21.174764    3892 logs.go:276] 0 containers: []
	W0731 10:16:21.174777    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:21.174840    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:21.189055    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:21.189074    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:21.189081    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:21.193314    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:21.193320    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:21.205431    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:21.205441    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:21.217235    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:21.217248    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:21.234936    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:21.234946    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:21.248749    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:21.248760    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:21.285985    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:21.285994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:21.300131    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:21.300141    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:21.340281    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:21.340291    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:21.354105    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:21.354115    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:21.365657    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:21.365668    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:21.380337    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:21.380347    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:21.391202    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:21.391213    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:21.403723    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:21.403733    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:21.445575    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:21.445586    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:21.460345    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:21.460355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:21.485598    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:21.485610    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:24.010628    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:27.982379    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:29.013209    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:29.013495    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:29.046861    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:29.046990    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:29.066270    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:29.066363    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:29.080820    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:29.080900    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:29.096658    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:29.096732    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:29.106994    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:29.107061    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:29.117427    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:29.117500    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:29.127259    3892 logs.go:276] 0 containers: []
	W0731 10:16:29.127272    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:29.127329    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:29.137794    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:29.137813    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:29.137821    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:29.149788    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:29.149798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:29.161142    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:29.161153    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:29.185047    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:29.185057    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:29.198018    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:29.198032    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:29.211928    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:29.211937    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:29.223489    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:29.223500    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:29.240572    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:29.240583    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:29.254312    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:29.254324    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:29.269088    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:29.269101    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:29.280177    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:29.280188    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:29.294808    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:29.294819    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:29.308582    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:29.308595    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:29.347159    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:29.347168    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:29.351528    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:29.351537    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:29.386797    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:29.386811    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:29.425065    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:29.425082    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:31.940895    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:32.984682    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:32.984795    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:32.996326    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:32.996401    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:33.011594    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:33.011667    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:33.022225    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:33.022293    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:33.035941    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:33.036011    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:33.046256    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:33.046318    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:33.056827    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:33.056891    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:33.067896    3839 logs.go:276] 0 containers: []
	W0731 10:16:33.067905    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:33.067954    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:33.078478    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:33.078496    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:33.078501    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:33.089807    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:33.089821    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:33.107293    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:33.107305    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:33.118642    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:33.118654    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:33.130613    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:33.130625    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:33.171684    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:33.171778    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:33.173340    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:33.173345    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:33.187596    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:33.187606    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:33.224359    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:33.224370    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:33.236146    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:33.236157    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:33.248010    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:33.248020    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:33.259960    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:33.259971    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:33.283932    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:33.283940    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:36.943185    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:36.943500    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:36.980876    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:36.980984    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:37.000513    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:37.000589    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:37.013391    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:37.013459    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:37.025700    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:37.025763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:37.036013    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:37.036075    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:37.046669    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:37.046735    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:37.056891    3892 logs.go:276] 0 containers: []
	W0731 10:16:37.056904    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:37.056970    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:37.067688    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:37.067706    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:37.067712    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:37.081366    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:37.081379    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:37.095493    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:37.095506    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:37.109408    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:37.109419    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:37.121164    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:37.121178    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:33.288012    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:33.288018    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:33.307026    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:33.307040    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:33.322860    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:33.322875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:33.342530    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:33.342541    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:33.354209    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:33.354222    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:33.365913    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:33.365923    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:33.365949    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:33.365955    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:33.365959    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:33.365963    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:33.365974    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:16:37.159892    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:37.159900    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:37.194251    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:37.194264    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:37.209282    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:37.209295    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:37.220692    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:37.220701    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:37.232040    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:37.232050    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:37.236528    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:37.236535    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:37.248760    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:37.248776    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:37.260923    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:37.260933    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:37.278909    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:37.278922    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:37.293574    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:37.293584    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:37.335040    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:37.335051    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:37.349986    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:37.349996    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:39.876648    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:44.878775    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:44.878929    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:44.893586    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:44.893665    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:44.905107    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:44.905176    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:44.915410    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:44.915473    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:44.926318    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:44.926388    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:44.940657    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:44.940719    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:44.951414    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:44.951474    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:44.961669    3892 logs.go:276] 0 containers: []
	W0731 10:16:44.961678    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:44.961726    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:44.972009    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:44.972026    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:44.972032    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:45.012441    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:45.012454    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:45.050014    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:45.050025    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:45.064217    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:45.064228    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:45.075967    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:45.075979    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:45.087548    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:45.087558    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:45.099635    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:45.099646    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:45.138044    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:45.138056    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:45.152349    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:45.152362    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:45.167326    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:45.167342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:45.185169    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:45.185181    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:45.197665    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:45.197677    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:45.208872    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:45.208882    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:45.212860    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:45.212870    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:45.226693    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:45.226703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:45.238523    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:45.238533    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:45.250618    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:45.250628    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
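Each discovered container then gets a bounded docker logs --tail 400, and the host-side sources (kubelet, Docker/cri-docker, dmesg, "describe nodes" via the pinned kubectl) are likewise capped at 400 lines, so a wedged cluster cannot balloon the report; note that the container-status command falls back from crictl to plain docker ps -a when crictl is absent. A condensed sketch of that gathering pass, with the commands copied verbatim from the log and the execution helper assumed (the real code pipes them through minikube's SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one gathering command through bash, as the log shows.
func run(cmd string) string {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out)
}

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", // crictl first, docker as fallback
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		// one per-container example; in the log this repeats for every ID found above
		"kube-apiserver [6ed76ce441d6]": "docker logs --tail 400 6ed76ce441d6",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		_ = run(cmd)
	}
}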
	I0731 10:16:43.368024    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:47.777364    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:48.369641    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:48.369945    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:48.418737    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:16:48.418863    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:48.435712    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:16:48.435795    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:48.448583    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:16:48.448661    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:48.459575    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:16:48.459650    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:48.470237    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:16:48.470304    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:48.483508    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:16:48.483577    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:48.493893    3839 logs.go:276] 0 containers: []
	W0731 10:16:48.493904    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:48.493963    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:48.504784    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:16:48.504800    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:48.504805    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:48.529389    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:16:48.529396    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:16:48.543491    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:16:48.543503    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:16:48.561374    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:16:48.561386    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:16:48.572870    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:16:48.572883    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:16:48.584634    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:16:48.584645    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:16:48.596889    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:16:48.596898    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:48.608826    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:48.608839    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:48.644657    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:16:48.644668    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:16:48.663689    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:16:48.663705    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:16:48.678142    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:16:48.678152    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:16:48.692217    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:16:48.692230    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:16:48.703869    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:16:48.703881    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:16:48.716044    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:16:48.716060    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:16:48.727704    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:48.727714    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:16:48.769252    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:48.769346    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:48.770926    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:48.770934    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:48.775657    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:16:48.775665    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:16:48.787694    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:48.787703    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:16:48.787728    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:16:48.787732    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:16:48.787735    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:16:48.787741    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:16:48.787744    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
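While reading the kubelet journal, known failure signatures are flagged ("Found kubelet problem") and then replayed under "X Problems detected in kubelet:". The two lines caught here are RBAC denials: the upgraded node's identity, system:node:running-upgrade-323000, has no recorded relationship to the kube-proxy ConfigMap its reflector must list and watch. A minimal sketch of such a signature scan, assuming plain substring matching (the actual matcher in logs.go is more elaborate):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findKubeletProblems scans journal lines for known failure signatures,
// roughly how the "Found kubelet problem" entries are produced.
func findKubeletProblems(journal string) []string {
	signatures := []string{
		"is forbidden",   // RBAC denials, as in this run
		"failed to list", // reflector list/watch failures
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, sig := range signatures {
			if strings.Contains(line, sig) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	journal := `Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 ... configmaps "kube-proxy" is forbidden`
	for _, p := range findKubeletProblems(journal) {
		fmt.Println("Found kubelet problem:", p)
	}
}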
	I0731 10:16:52.779479    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:52.779657    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:52.803613    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:52.803698    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:52.815819    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:52.815888    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:52.827213    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:52.827284    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:52.837821    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:52.837895    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:52.848107    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:52.848169    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:52.858781    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:52.858854    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:52.868922    3892 logs.go:276] 0 containers: []
	W0731 10:16:52.868932    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:52.868987    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:52.879642    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:52.879661    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:52.879667    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:52.891752    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:52.891763    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:52.903204    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:52.903219    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:52.940164    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:52.940172    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:52.953724    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:52.953735    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:52.968466    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:52.968477    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:52.979703    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:52.979713    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:53.002491    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:53.002499    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:53.014950    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:53.014961    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:53.032383    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:53.032393    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:53.046512    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:53.046522    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:53.059612    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:53.059623    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:53.096083    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:53.096094    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:53.109359    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:53.109371    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:53.126784    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:53.126794    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:53.131254    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:53.131260    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:53.169486    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:53.169496    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:55.686159    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:00.688458    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:00.688662    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:00.708680    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:00.708772    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:00.723646    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:00.723724    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:00.736018    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:00.736090    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:00.746706    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:00.746763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:00.757208    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:00.757274    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:00.767878    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:00.767943    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:00.781823    3892 logs.go:276] 0 containers: []
	W0731 10:17:00.781837    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:00.781891    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:00.792712    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:00.792742    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:00.792748    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:00.828155    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:00.828166    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:00.870557    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:00.870567    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:00.884261    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:00.884274    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:00.898642    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:00.898655    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:00.922371    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:00.922382    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:00.936038    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:00.936050    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:00.949299    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:00.949313    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:00.964137    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:00.964150    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:00.976527    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:00.976541    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:00.988468    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:00.988479    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:01.003506    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:01.003518    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:01.015688    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:01.015698    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:01.052319    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:01.052328    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:01.056138    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:01.056146    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:01.067118    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:01.067128    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:01.078385    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:01.078396    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:58.791732    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:03.605215    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:03.793338    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
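Two minikube processes share this log (PID 3839 drives the running-upgrade-323000 cluster, PID 3892 apparently a second upgrade cluster; each QEMU guest sees its own 10.0.2.15), which is why timestamps interleave out of order. Note also the changed failure mode on the line above: earlier probes died with "context deadline exceeded" after a connection was made, whereas this one fails at "dial tcp ... i/o timeout" — nothing accepted the connection at all, suggesting the apiserver stopped listening. A small sketch of telling those apart, assuming the stock net/http error chain:

package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"os"
	"time"
)

// classify reports whether a probe error is a dial failure (no listener)
// or a timeout after connecting. Hypothetical helper; the report itself
// just prints the raw error text.
func classify(err error) string {
	var opErr *net.OpError
	if errors.As(err, &opErr) && opErr.Op == "dial" {
		return "dial failed: nothing accepting connections on the port"
	}
	if os.IsTimeout(err) {
		return "connection opened but no response before the client timeout"
	}
	return "other: " + err.Error()
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	if _, err := client.Get("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(classify(err))
	}
}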
	I0731 10:17:03.793465    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:03.821214    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:17:03.821290    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:03.857690    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:17:03.857836    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:03.877321    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:17:03.877396    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:03.887686    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:17:03.887759    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:03.898760    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:17:03.898829    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:03.909349    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:17:03.909441    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:03.919201    3839 logs.go:276] 0 containers: []
	W0731 10:17:03.919210    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:03.919268    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:03.930597    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:17:03.930613    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:17:03.930619    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:17:03.945341    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:17:03.945350    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:03.957344    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:17:03.957357    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:17:03.968538    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:17:03.968552    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:17:03.979883    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:17:03.979898    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:17:03.991597    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:17:03.991611    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:17:04.003566    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:17:04.003575    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:17:04.014878    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:04.014890    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:04.019836    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:17:04.019842    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:17:04.034548    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:17:04.034556    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:17:04.046092    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:17:04.046102    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:17:04.063823    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:04.063834    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:04.087934    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:04.087942    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:17:04.130720    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:04.130819    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:04.132420    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:04.132425    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:04.168823    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:17:04.168836    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:17:04.180348    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:17:04.180362    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:17:04.191606    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:17:04.191617    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:17:04.205978    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:04.205991    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:17:04.206017    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:17:04.206021    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:04.206025    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:04.206028    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:04.206031    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:17:08.607441    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:08.607720    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:08.636895    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:08.637011    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:08.654198    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:08.654282    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:08.668169    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:08.668368    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:08.680504    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:08.680579    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:08.691000    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:08.691066    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:08.701768    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:08.701837    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:08.711818    3892 logs.go:276] 0 containers: []
	W0731 10:17:08.711828    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:08.711881    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:08.722165    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:08.722183    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:08.722188    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:08.738654    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:08.738664    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:08.752430    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:08.752444    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:08.766378    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:08.766387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:08.778517    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:08.778528    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:08.793949    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:08.793965    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:08.810979    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:08.810994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:08.824264    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:08.824274    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:08.863431    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:08.863443    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:08.901558    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:08.901571    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:08.912406    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:08.912421    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:08.949342    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:08.949355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:08.964106    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:08.964119    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:08.975752    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:08.975762    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:08.999821    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:08.999829    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:09.011298    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:09.011309    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:09.015695    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:09.015703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:11.531605    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:16.534290    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:16.534481    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:16.553448    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:16.553544    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:16.568494    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:16.568572    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:16.580931    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:16.581006    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:16.591536    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:16.591610    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:16.602077    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:16.602144    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:16.612306    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:16.612376    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:16.622663    3892 logs.go:276] 0 containers: []
	W0731 10:17:16.622674    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:16.622731    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:16.633215    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:16.633231    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:16.633238    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:16.653955    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:16.653969    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:16.670314    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:16.670326    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:16.674740    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:16.674750    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:16.686790    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:16.686804    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:16.701343    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:16.701352    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:16.714611    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:16.714620    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:16.732506    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:16.732516    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:16.747652    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:16.747662    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:16.786414    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:16.786421    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:16.820935    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:16.820952    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:16.865538    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:16.865548    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:16.879547    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:16.879562    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:16.903407    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:16.903416    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:16.915028    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:16.915037    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:16.929607    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:16.929617    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:16.947666    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:16.947676    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:14.210075    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:19.467576    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:19.212476    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:19.212836    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:19.245962    3839 logs.go:276] 2 containers: [5406bbc36270 7a0ac0249826]
	I0731 10:17:19.246107    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:19.266082    3839 logs.go:276] 2 containers: [38dd2c66af17 5f107d2cbaf4]
	I0731 10:17:19.266182    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:19.281567    3839 logs.go:276] 1 containers: [d5c9c87fe956]
	I0731 10:17:19.281649    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:19.293956    3839 logs.go:276] 2 containers: [c57c2129944d 9253f276c9cb]
	I0731 10:17:19.294030    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:19.305229    3839 logs.go:276] 1 containers: [13377e696e53]
	I0731 10:17:19.305299    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:19.316092    3839 logs.go:276] 2 containers: [ab3e827fe425 d76f08334aba]
	I0731 10:17:19.316154    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:19.327171    3839 logs.go:276] 0 containers: []
	W0731 10:17:19.327184    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:19.327240    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:19.338516    3839 logs.go:276] 2 containers: [12f8a390e248 cc14bb0626bb]
	I0731 10:17:19.338535    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:19.338542    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:19.343158    3839 logs.go:123] Gathering logs for etcd [38dd2c66af17] ...
	I0731 10:17:19.343167    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38dd2c66af17"
	I0731 10:17:19.357104    3839 logs.go:123] Gathering logs for coredns [d5c9c87fe956] ...
	I0731 10:17:19.357115    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5c9c87fe956"
	I0731 10:17:19.368978    3839 logs.go:123] Gathering logs for kube-scheduler [9253f276c9cb] ...
	I0731 10:17:19.368989    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9253f276c9cb"
	I0731 10:17:19.380327    3839 logs.go:123] Gathering logs for storage-provisioner [cc14bb0626bb] ...
	I0731 10:17:19.380340    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc14bb0626bb"
	I0731 10:17:19.392128    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:19.392140    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:17:19.436146    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:19.436246    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:19.437780    3839 logs.go:123] Gathering logs for kube-apiserver [5406bbc36270] ...
	I0731 10:17:19.437787    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5406bbc36270"
	I0731 10:17:19.455502    3839 logs.go:123] Gathering logs for kube-scheduler [c57c2129944d] ...
	I0731 10:17:19.455520    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c57c2129944d"
	I0731 10:17:19.467469    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:19.467482    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:19.489497    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:17:19.489507    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:19.500721    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:19.500734    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:19.534770    3839 logs.go:123] Gathering logs for kube-apiserver [7a0ac0249826] ...
	I0731 10:17:19.534780    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a0ac0249826"
	I0731 10:17:19.546154    3839 logs.go:123] Gathering logs for kube-proxy [13377e696e53] ...
	I0731 10:17:19.546167    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13377e696e53"
	I0731 10:17:19.558179    3839 logs.go:123] Gathering logs for kube-controller-manager [ab3e827fe425] ...
	I0731 10:17:19.558193    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3e827fe425"
	I0731 10:17:19.579779    3839 logs.go:123] Gathering logs for kube-controller-manager [d76f08334aba] ...
	I0731 10:17:19.579793    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76f08334aba"
	I0731 10:17:19.591482    3839 logs.go:123] Gathering logs for etcd [5f107d2cbaf4] ...
	I0731 10:17:19.591492    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f107d2cbaf4"
	I0731 10:17:19.602738    3839 logs.go:123] Gathering logs for storage-provisioner [12f8a390e248] ...
	I0731 10:17:19.602752    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12f8a390e248"
	I0731 10:17:19.616595    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:19.616607    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:17:19.616635    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:17:19.616639    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:17:19.616644    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:17:19.616650    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:17:19.616653    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:17:24.467810    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:24.468060    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:24.497062    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:24.497164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:24.512877    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:24.512959    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:24.526165    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:24.526238    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:24.537255    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:24.537317    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:24.547793    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:24.547863    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:24.558235    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:24.558294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:24.568881    3892 logs.go:276] 0 containers: []
	W0731 10:17:24.568893    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:24.568952    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:24.579605    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:24.579623    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:24.579629    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:24.583615    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:24.583622    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:24.620568    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:24.620578    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:24.632538    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:24.632549    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:24.649975    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:24.649985    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:24.686836    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:24.686843    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:24.722096    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:24.722108    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:24.737288    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:24.737297    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:24.752496    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:24.752507    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:24.763785    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:24.763795    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:24.775662    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:24.775673    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:24.788750    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:24.788762    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:24.800003    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:24.800012    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:24.810826    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:24.810838    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:24.833729    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:24.833738    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:24.847597    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:24.847607    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:24.861352    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:24.861362    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:27.378608    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:29.620670    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:34.622943    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:34.623018    3839 kubeadm.go:597] duration metric: took 4m7.790766708s to restartPrimaryControlPlane
	W0731 10:17:34.623064    3839 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 10:17:34.623086    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
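At this point PID 3839 gives up: after 4m07s of failed probes, restartPrimaryControlPlane abandons the in-place restart and falls back to wiping the control plane with kubeadm reset --force before re-initialising. (The "<no value>" trailing the warning looks like an unfilled Go-template placeholder in minikube's message, not data from the cluster.) A sketch of that give-up-and-reset decision, with waitForHealthy and sshRun as hypothetical stand-ins for minikube's internal helpers:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// sshRun stands in for minikube's SSH runner; here it just runs locally.
func sshRun(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForHealthy is assumed to poll /healthz as sketched earlier and
// return an error once the budget is spent; stubbed for this sketch.
func waitForHealthy(url string, budget time.Duration) error {
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	start := time.Now()
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
		log.Print("! Unable to restart control-plane node(s), will reset cluster")
		_ = sshRun(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
			`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
	}
}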
	I0731 10:17:35.616570    3839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:17:35.621528    3839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:17:35.624989    3839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:17:35.628034    3839 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:17:35.628038    3839 kubeadm.go:157] found existing configuration files:
	
	I0731 10:17:35.628060    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf
	I0731 10:17:35.630524    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:17:35.630542    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:17:35.633664    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf
	I0731 10:17:35.636998    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:17:35.637020    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:17:35.639688    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf
	I0731 10:17:35.642134    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:17:35.642151    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:17:35.645266    3839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf
	I0731 10:17:35.647889    3839 kubeadm.go:163] "https://control-plane.minikube.internal:50276" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50276 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:17:35.647905    3839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 10:17:35.650476    3839 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
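The lines above are minikube's fallback path when restarting the control plane fails: kubeadm reset, removal of any stale kubeconfigs, then a fresh kubeadm init against the generated config. A hedged manual replay of that sequence, with paths and flags taken from the log lines above (the full --ignore-preflight-errors list is abbreviated here):

    # Sketch of the reset-and-reinit fallback shown in the surrounding log;
    # run inside the guest VM. Paths are copied from the logged commands.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    for f in admin kubelet controller-manager scheduler; do
      sudo rm -f "/etc/kubernetes/${f}.conf"   # stale config cleanup
    done
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=...            # full list as in the log line above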
	I0731 10:17:35.666651    3839 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 10:17:35.666770    3839 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 10:17:35.717505    3839 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:17:35.717579    3839 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:17:35.717633    3839 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 10:17:35.766725    3839 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:17:35.770920    3839 out.go:204]   - Generating certificates and keys ...
	I0731 10:17:35.770959    3839 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 10:17:35.770993    3839 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 10:17:35.771032    3839 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 10:17:35.771063    3839 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 10:17:35.771098    3839 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 10:17:35.771128    3839 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 10:17:35.771168    3839 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 10:17:35.771212    3839 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 10:17:35.771248    3839 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 10:17:35.771290    3839 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 10:17:35.771313    3839 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 10:17:35.771342    3839 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:17:35.932322    3839 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:17:36.033553    3839 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:17:36.197483    3839 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:17:36.235885    3839 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:17:36.265829    3839 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:17:36.265878    3839 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:17:36.265921    3839 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 10:17:36.357052    3839 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:17:32.381339    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:32.381540    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:32.397030    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:32.397120    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:32.408943    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:32.409019    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:32.419554    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:32.419614    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:32.430317    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:32.430395    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:32.440780    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:32.440848    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:32.451210    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:32.451274    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:32.462868    3892 logs.go:276] 0 containers: []
	W0731 10:17:32.462883    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:32.462934    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:32.474898    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:32.474917    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:32.474923    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:32.486695    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:32.486706    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:32.499834    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:32.499849    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:32.513645    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:32.513658    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:32.528622    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:32.528632    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:32.539697    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:32.539709    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:32.559400    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:32.559413    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:32.597648    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:32.597657    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:32.602322    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:32.602329    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:32.637449    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:32.637461    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:32.649875    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:32.649889    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:32.665334    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:32.665347    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:32.683247    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:32.683259    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:32.706669    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:32.706676    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:32.717631    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:32.717642    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:32.757303    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:32.757314    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:32.769476    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:32.769487    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:35.282955    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:36.361274    3839 out.go:204]   - Booting up control plane ...
	I0731 10:17:36.361318    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:17:36.361356    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:17:36.361450    3839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:17:36.361700    3839 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:17:36.362487    3839 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:17:40.868256    3839 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504953 seconds
	I0731 10:17:40.868336    3839 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:17:40.871293    3839 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:17:41.397949    3839 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:17:41.398404    3839 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-323000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:17:41.902396    3839 kubeadm.go:310] [bootstrap-token] Using token: jjgv1s.cijpkg01pfwrwhoo
	I0731 10:17:40.285115    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:40.285268    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:40.299709    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:40.299787    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:40.312077    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:40.312142    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:40.323329    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:40.323399    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:40.334254    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:40.334379    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:40.345320    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:40.345393    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:40.356372    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:40.356442    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:40.366827    3892 logs.go:276] 0 containers: []
	W0731 10:17:40.366842    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:40.366903    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:40.377851    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:40.377867    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:40.377875    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:40.382514    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:40.382520    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:40.420688    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:40.420703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:40.439749    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:40.439765    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:40.463396    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:40.463407    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:40.476629    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:40.476644    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:40.491355    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:40.491366    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:40.507673    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:40.507686    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:40.520564    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:40.520575    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:40.560910    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:40.560930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:40.574085    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:40.574097    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:40.590618    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:40.590635    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:40.631519    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:40.631534    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:40.647300    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:40.647315    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:40.660715    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:40.660728    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:40.679227    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:40.679238    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:40.692820    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:40.692831    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:41.906647    3839 out.go:204]   - Configuring RBAC rules ...
	I0731 10:17:41.906737    3839 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:17:41.908993    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:17:41.914796    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:17:41.915792    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0731 10:17:41.917011    3839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:17:41.918078    3839 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:17:41.921800    3839 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:17:42.083496    3839 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 10:17:42.310912    3839 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 10:17:42.311295    3839 kubeadm.go:310] 
	I0731 10:17:42.311329    3839 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 10:17:42.311332    3839 kubeadm.go:310] 
	I0731 10:17:42.311369    3839 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 10:17:42.311376    3839 kubeadm.go:310] 
	I0731 10:17:42.311390    3839 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 10:17:42.311423    3839 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:17:42.311453    3839 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:17:42.311456    3839 kubeadm.go:310] 
	I0731 10:17:42.311484    3839 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 10:17:42.311487    3839 kubeadm.go:310] 
	I0731 10:17:42.311509    3839 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:17:42.311513    3839 kubeadm.go:310] 
	I0731 10:17:42.311539    3839 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 10:17:42.311577    3839 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:17:42.311614    3839 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:17:42.311617    3839 kubeadm.go:310] 
	I0731 10:17:42.311662    3839 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:17:42.311710    3839 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 10:17:42.311715    3839 kubeadm.go:310] 
	I0731 10:17:42.311761    3839 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jjgv1s.cijpkg01pfwrwhoo \
	I0731 10:17:42.311814    3839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 \
	I0731 10:17:42.311827    3839 kubeadm.go:310] 	--control-plane 
	I0731 10:17:42.311830    3839 kubeadm.go:310] 
	I0731 10:17:42.311874    3839 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:17:42.311879    3839 kubeadm.go:310] 
	I0731 10:17:42.311926    3839 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jjgv1s.cijpkg01pfwrwhoo \
	I0731 10:17:42.311979    3839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 
	I0731 10:17:42.312039    3839 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
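The [WARNING Service-Kubelet] line above is kubeadm's own hint rather than a failure; the one-line remedy it suggests, if run inside the guest, would be:

    # As suggested by the warning above; makes kubelet start at boot.
    sudo systemctl enable kubelet.service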
	I0731 10:17:42.312045    3839 cni.go:84] Creating CNI manager for ""
	I0731 10:17:42.312053    3839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:17:42.316523    3839 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 10:17:42.324483    3839 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 10:17:42.327321    3839 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
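The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log; a representative bridge conflist of the kind minikube generates for the bridge CNI is sketched below (field values are illustrative assumptions, not the logged file contents):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }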
	I0731 10:17:42.332247    3839 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:17:42.332301    3839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:17:42.332302    3839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-323000 minikube.k8s.io/updated_at=2024_07_31T10_17_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=running-upgrade-323000 minikube.k8s.io/primary=true
	I0731 10:17:42.361886    3839 ops.go:34] apiserver oom_adj: -16
	I0731 10:17:42.361973    3839 kubeadm.go:1113] duration metric: took 29.707708ms to wait for elevateKubeSystemPrivileges
	I0731 10:17:42.374837    3839 kubeadm.go:394] duration metric: took 4m15.579495208s to StartCluster
	I0731 10:17:42.374854    3839 settings.go:142] acquiring lock: {Name:mk1157c8925fbba0a07eb1816bde23d6b74fa3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:17:42.374943    3839 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:17:42.375369    3839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:17:42.375579    3839 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:17:42.375594    3839 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 10:17:42.375632    3839 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-323000"
	I0731 10:17:42.375643    3839 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-323000"
	W0731 10:17:42.375647    3839 addons.go:243] addon storage-provisioner should already be in state true
	I0731 10:17:42.375662    3839 config.go:182] Loaded profile config "running-upgrade-323000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:17:42.375665    3839 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-323000"
	I0731 10:17:42.375670    3839 host.go:66] Checking if "running-upgrade-323000" exists ...
	I0731 10:17:42.375677    3839 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-323000"
	I0731 10:17:42.376558    3839 kapi.go:59] client config for running-upgrade-323000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/running-upgrade-323000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a6c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:17:42.376704    3839 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-323000"
	W0731 10:17:42.376709    3839 addons.go:243] addon default-storageclass should already be in state true
	I0731 10:17:42.376716    3839 host.go:66] Checking if "running-upgrade-323000" exists ...
	I0731 10:17:42.378388    3839 out.go:177] * Verifying Kubernetes components...
	I0731 10:17:42.378769    3839 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:17:42.382644    3839 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:17:42.382651    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:17:42.386540    3839 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:17:42.390462    3839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:17:42.394474    3839 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:17:42.394480    3839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:17:42.394486    3839 sshutil.go:53] new ssh client: &{IP:localhost Port:50244 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/running-upgrade-323000/id_rsa Username:docker}
	I0731 10:17:42.478240    3839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:17:42.483480    3839 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:17:42.483519    3839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:17:42.487684    3839 api_server.go:72] duration metric: took 112.094666ms to wait for apiserver process to appear ...
	I0731 10:17:42.487692    3839 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:17:42.487698    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:42.532540    3839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:17:42.547212    3839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
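From this point both test processes (pids 3839 and 3892) interleave the same pattern: poll https://10.0.2.15:8443/healthz with a short client timeout, and on each failure gather container and journal logs before retrying. A minimal shell sketch of that probe, assuming the same endpoint and a cluster-internal certificate:

    # Hypothetical equivalent of the healthz loop in the log; -k skips TLS
    # verification, -f fails on HTTP errors, --max-time mirrors the timeout.
    while ! curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
      sleep 5
    done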
	I0731 10:17:43.206310    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:47.489836    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:47.489909    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:48.208648    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:48.208870    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:48.233220    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:48.233343    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:48.249236    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:48.249326    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:48.262195    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:48.262267    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:48.273322    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:48.273389    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:48.283545    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:48.283612    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:48.294439    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:48.294510    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:48.304498    3892 logs.go:276] 0 containers: []
	W0731 10:17:48.304508    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:48.304563    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:48.315435    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:48.315454    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:48.315460    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:48.350329    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:48.350343    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:48.362333    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:48.362346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:48.398693    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:48.398710    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:48.410528    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:48.410539    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:48.422408    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:48.422421    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:48.440295    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:48.440306    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:48.453616    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:48.453628    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:48.468266    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:48.468280    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:48.481904    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:48.481916    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:48.496464    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:48.496474    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:48.508729    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:48.508740    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:48.521277    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:48.521290    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:48.525804    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:48.525812    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:48.540080    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:48.540093    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:48.578861    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:48.578871    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:48.598265    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:48.598276    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:51.122350    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:52.490594    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:52.490624    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:56.124850    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:56.124983    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:56.138676    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:56.138760    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:56.149869    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:56.149948    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:56.160363    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:56.160431    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:56.172007    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:56.172078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:56.183564    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:56.183634    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:56.194777    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:56.194845    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:56.205121    3892 logs.go:276] 0 containers: []
	W0731 10:17:56.205137    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:56.205196    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:56.216161    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:56.216180    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:56.216188    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:56.255143    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:56.255155    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:56.273122    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:56.273133    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:56.312926    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:56.312937    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:56.317673    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:56.317680    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:56.356678    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:56.356691    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:56.373226    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:56.373239    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:56.385673    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:56.385685    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:56.398996    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:56.399007    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:56.410034    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:56.410045    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:56.425201    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:56.425218    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:56.438545    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:56.438556    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:56.451474    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:56.451485    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:56.467779    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:56.467792    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:56.485495    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:56.485508    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:56.498041    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:56.498054    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:56.510246    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:56.510257    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:57.491174    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:57.491224    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:59.035583    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:02.491760    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:02.491793    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:04.037985    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:04.038324    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:04.073718    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:18:04.073846    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:04.093069    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:18:04.093143    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:04.106756    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:18:04.106832    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:04.122816    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:18:04.122890    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:04.134530    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:18:04.134590    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:04.145713    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:18:04.145783    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:04.156697    3892 logs.go:276] 0 containers: []
	W0731 10:18:04.156708    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:04.156770    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:04.167640    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:18:04.167658    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:18:04.167663    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:18:04.186233    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:18:04.186245    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:18:04.225123    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:18:04.225134    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:18:04.237208    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:04.237219    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:04.241795    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:18:04.241804    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:18:04.256050    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:04.256061    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:18:04.296310    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:04.296330    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:04.352061    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:18:04.352074    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:18:04.367233    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:18:04.367245    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:18:04.379215    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:18:04.379227    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:18:04.396636    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:18:04.396646    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:18:04.415806    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:18:04.415815    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:18:04.429015    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:18:04.429025    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:18:04.442760    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:18:04.442772    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:18:04.455130    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:18:04.455139    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:18:04.466726    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:04.466737    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:04.490377    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:18:04.490390    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:07.004451    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:07.492567    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:07.492614    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:12.006685    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:12.006918    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:12.028293    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:18:12.028401    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:12.043350    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:18:12.043420    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:12.055366    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:18:12.055443    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:12.066575    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:18:12.066648    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:12.076746    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:18:12.076813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:12.087260    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:18:12.087330    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:12.102556    3892 logs.go:276] 0 containers: []
	W0731 10:18:12.102571    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:12.102634    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:12.112942    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:18:12.112960    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:18:12.112965    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:18:12.128356    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:18:12.128367    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:12.141260    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:18:12.141276    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:18:12.493598    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:12.493637    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 10:18:12.849464    3839 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 10:18:12.854370    3839 out.go:177] * Enabled addons: storage-provisioner
	I0731 10:18:12.861278    3839 addons.go:510] duration metric: took 30.48618s for enable addons: enabled=[storage-provisioner]
	I0731 10:18:12.155631    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:18:12.155640    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:18:12.194452    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:18:12.194469    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:18:12.212597    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:18:12.212619    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:18:12.224821    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:12.224832    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:12.247029    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:12.247036    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:12.251189    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:18:12.251197    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:18:12.262876    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:18:12.262885    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:18:12.291932    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:18:12.291945    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:18:12.311692    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:12.311704    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:18:12.350239    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:18:12.350253    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:18:12.364722    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:18:12.364734    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:18:12.376750    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:12.376762    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:12.414702    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:18:12.414714    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:18:12.432989    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:18:12.433002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:18:14.953716    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:17.494966    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:17.495001    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:19.955984    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:19.956079    3892 kubeadm.go:597] duration metric: took 4m4.223428375s to restartPrimaryControlPlane
	W0731 10:18:19.956143    3892 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 10:18:19.956171    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 10:18:20.957404    3892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001234833s)
	I0731 10:18:20.957474    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:18:20.962432    3892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:18:20.965174    3892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:18:20.967963    3892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:18:20.967968    3892 kubeadm.go:157] found existing configuration files:
	
	I0731 10:18:20.967991    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf
	I0731 10:18:20.970637    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:18:20.970660    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:18:20.973080    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf
	I0731 10:18:20.976068    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:18:20.976093    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:18:20.979182    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf
	I0731 10:18:20.981791    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:18:20.981811    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:18:20.984658    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf
	I0731 10:18:20.987564    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:18:20.987587    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
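The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here every grep exits with status 2 simply because kubeadm reset already deleted the files). A minimal Go sketch of that pattern follows; the helper name is illustrative, not minikube's actual kubeadm.go code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanupStaleKubeconfigs mirrors the grep-then-rm loop in the log:
    // any kubeconfig that does not mention the expected endpoint (or does
    // not exist) is removed so that `kubeadm init` can regenerate it.
    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Fprintf(os.Stderr, "%q may not be in %s - will remove\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:50348")
    }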
	I0731 10:18:20.990247    3892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 10:18:21.006982    3892 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 10:18:21.007179    3892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 10:18:21.067731    3892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:18:21.067780    3892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:18:21.067826    3892 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:18:21.118374    3892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:18:21.121602    3892 out.go:204]   - Generating certificates and keys ...
	I0731 10:18:21.121638    3892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 10:18:21.121670    3892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 10:18:21.121710    3892 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 10:18:21.121746    3892 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 10:18:21.121778    3892 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 10:18:21.121817    3892 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 10:18:21.121853    3892 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 10:18:21.121886    3892 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 10:18:21.121924    3892 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 10:18:21.121959    3892 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 10:18:21.121986    3892 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 10:18:21.122021    3892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:18:21.268828    3892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:18:21.309888    3892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:18:21.431260    3892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:18:21.679469    3892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:18:21.708480    3892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:18:21.709452    3892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:18:21.709476    3892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 10:18:21.794051    3892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:18:21.798055    3892 out.go:204]   - Booting up control plane ...
	I0731 10:18:21.798123    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:18:21.798162    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:18:21.798204    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:18:21.798269    3892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:18:21.798397    3892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:18:22.496637    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:22.496685    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:26.796374    3892 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001004 seconds
	I0731 10:18:26.796445    3892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:18:26.800388    3892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:18:27.310603    3892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:18:27.310880    3892 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-608000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:18:27.815382    3892 kubeadm.go:310] [bootstrap-token] Using token: k37bxj.v1f4hmhqqkyf239x
	I0731 10:18:27.821462    3892 out.go:204]   - Configuring RBAC rules ...
	I0731 10:18:27.821539    3892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:18:27.821597    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:18:27.828174    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:18:27.829050    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:18:27.830213    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:18:27.831071    3892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:18:27.834589    3892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:18:27.984802    3892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 10:18:28.220783    3892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 10:18:28.221319    3892 kubeadm.go:310] 
	I0731 10:18:28.221350    3892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 10:18:28.221353    3892 kubeadm.go:310] 
	I0731 10:18:28.221394    3892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 10:18:28.221401    3892 kubeadm.go:310] 
	I0731 10:18:28.221415    3892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 10:18:28.221441    3892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:18:28.221465    3892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:18:28.221469    3892 kubeadm.go:310] 
	I0731 10:18:28.221500    3892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 10:18:28.221503    3892 kubeadm.go:310] 
	I0731 10:18:28.221529    3892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:18:28.221537    3892 kubeadm.go:310] 
	I0731 10:18:28.221567    3892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 10:18:28.221599    3892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:18:28.221638    3892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:18:28.221641    3892 kubeadm.go:310] 
	I0731 10:18:28.221679    3892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:18:28.221720    3892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 10:18:28.221724    3892 kubeadm.go:310] 
	I0731 10:18:28.221764    3892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k37bxj.v1f4hmhqqkyf239x \
	I0731 10:18:28.221822    3892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 \
	I0731 10:18:28.221832    3892 kubeadm.go:310] 	--control-plane 
	I0731 10:18:28.221836    3892 kubeadm.go:310] 
	I0731 10:18:28.221898    3892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:18:28.221985    3892 kubeadm.go:310] 
	I0731 10:18:28.222066    3892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k37bxj.v1f4hmhqqkyf239x \
	I0731 10:18:28.222114    3892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 
	I0731 10:18:28.222163    3892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
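The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A short Go sketch that recomputes it from the CA certificate; the certificate path matches the certs folder named earlier in the log but should be treated as an assumption:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Recomputes kubeadm's discovery-token-ca-cert-hash: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }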
	I0731 10:18:28.222175    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:18:28.222182    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:18:28.228499    3892 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 10:18:27.498896    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:27.498923    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:28.232564    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 10:18:28.235689    3892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
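The 496-byte 1-k8s.conflist pushed above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. Its exact contents are not shown in the log; the sketch below writes a generic CNI bridge conflist of the usual shape, and every field value in it is an assumption rather than minikube's literal file:

    package main

    import "os"

    // A generic bridge conflist in the spirit of /etc/cni/net.d/1-k8s.conflist;
    // the subnet and plugin options are illustrative, not minikube's exact file.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }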
	I0731 10:18:28.240422    3892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:18:28.240472    3892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:18:28.240476    3892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-608000 minikube.k8s.io/updated_at=2024_07_31T10_18_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=stopped-upgrade-608000 minikube.k8s.io/primary=true
	I0731 10:18:28.278996    3892 kubeadm.go:1113] duration metric: took 38.552709ms to wait for elevateKubeSystemPrivileges
	I0731 10:18:28.279010    3892 ops.go:34] apiserver oom_adj: -16
	I0731 10:18:28.289607    3892 kubeadm.go:394] duration metric: took 4m12.571124791s to StartCluster
	I0731 10:18:28.289626    3892 settings.go:142] acquiring lock: {Name:mk1157c8925fbba0a07eb1816bde23d6b74fa3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:18:28.289718    3892 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:18:28.290123    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:18:28.290810    3892 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:18:28.290824    3892 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 10:18:28.290856    3892 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-608000"
	I0731 10:18:28.290868    3892 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-608000"
	W0731 10:18:28.290871    3892 addons.go:243] addon storage-provisioner should already be in state true
	I0731 10:18:28.290882    3892 host.go:66] Checking if "stopped-upgrade-608000" exists ...
	I0731 10:18:28.290883    3892 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-608000"
	I0731 10:18:28.290893    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:18:28.290897    3892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-608000"
	I0731 10:18:28.291781    3892 kapi.go:59] client config for stopped-upgrade-608000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105cb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0731 10:18:28.291896    3892 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-608000"
	W0731 10:18:28.291901    3892 addons.go:243] addon default-storageclass should already be in state true
	I0731 10:18:28.291907    3892 host.go:66] Checking if "stopped-upgrade-608000" exists ...
	I0731 10:18:28.293549    3892 out.go:177] * Verifying Kubernetes components...
	I0731 10:18:28.293856    3892 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:18:28.293943    3892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:18:28.293949    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:18:28.297554    3892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:18:28.301487    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:18:28.305477    3892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:18:28.305483    3892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:18:28.305489    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:18:28.381263    3892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:18:28.386393    3892 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:18:28.386432    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:18:28.390284    3892 api_server.go:72] duration metric: took 99.463084ms to wait for apiserver process to appear ...
	I0731 10:18:28.390292    3892 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:18:28.390298    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:28.414390    3892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:18:28.429698    3892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
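Both addon manifests are applied with the pinned kubectl inside the guest against /var/lib/minikube/kubeconfig. For reference, a default StorageClass of the kind storageclass.yaml typically carries is sketched below; the class name, provisioner, and annotation value are assumptions, not the literal 271-byte file from the log:

    package main

    import (
        "os"
        "os/exec"
    )

    // An illustrative default StorageClass like the one the storageclass
    // addon applies; the is-default-class annotation is what makes it the
    // cluster default. Field values are assumptions.
    const storageClass = `apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath
    `

    func main() {
        if err := os.WriteFile("/tmp/storageclass.yaml", []byte(storageClass), 0o644); err != nil {
            panic(err)
        }
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", "/tmp/storageclass.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }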
	I0731 10:18:32.501013    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:32.501056    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:33.392348    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:33.392382    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:37.502594    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:37.502615    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:38.392628    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:38.392653    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
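The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above come from two concurrent test profiles (PIDs 3839 and 3892) each polling its guest at https://10.0.2.15:8443/healthz; every GET times out client-side against the unreachable apiserver and the loop retries until an overall deadline. A minimal sketch of that polling pattern, with the timeout and retry values chosen purely for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz retries GET /healthz with a short per-request timeout,
    // matching the "Checking ... / stopped: context deadline exceeded"
    // cadence in the log.
    func pollHealthz(url string, interval time.Duration, attempts int) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // yields "Client.Timeout exceeded" on a hung apiserver
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        for i := 0; i < attempts; i++ {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                return nil
            }
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
            } else {
                resp.Body.Close()
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 500*time.Millisecond, 10); err != nil {
            fmt.Println(err)
        }
    }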
	I0731 10:18:42.504750    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:42.504864    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:42.516339    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:18:42.516416    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:42.527488    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:18:42.527569    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:42.539790    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:18:42.539868    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:42.558385    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:18:42.558462    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:42.584991    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:18:42.585135    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:42.606547    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:18:42.606612    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:42.625319    3839 logs.go:276] 0 containers: []
	W0731 10:18:42.625334    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:42.625399    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:42.637181    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:18:42.637201    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:42.637207    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:18:42.657927    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:42.658032    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:42.676160    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:18:42.676180    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:18:42.695395    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:18:42.695407    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:18:42.707418    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:18:42.707432    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:18:42.726674    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:18:42.726687    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:18:42.740143    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:42.740156    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:42.767432    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:18:42.767446    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:42.779622    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:42.779633    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:42.784523    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:42.784533    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:42.827804    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:18:42.827818    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:18:42.843004    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:18:42.843016    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:18:42.858341    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:18:42.858353    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:18:42.878182    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:18:42.878204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:18:42.892120    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:42.892130    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:18:42.892158    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:18:42.892164    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:42.892169    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:42.892173    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:42.892176    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
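Each diagnostic cycle above follows the same recipe: list containers whose names match k8s_<component> to find one ID per control-plane component, then tail the last 400 lines of each container's logs (kubelet and Docker are gathered separately via journalctl). A compact Go sketch of that gather loop, with the component list taken from the log and the output format being an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherLogs mimics the diagnostic cycle in the log: resolve container
    // IDs by the k8s_<component> name filter, then tail each one's logs.
    func gatherLogs() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }

    func main() { gatherLogs() }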
	I0731 10:18:43.392894    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:43.392913    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:48.393225    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:48.393268    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:52.896143    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:53.393908    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:53.393955    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:57.898803    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:57.898916    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:57.911333    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:18:57.911414    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:57.921673    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:18:57.921744    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:57.931990    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:18:57.932060    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:57.943231    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:18:57.943298    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:57.954142    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:18:57.954210    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:57.964884    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:18:57.964949    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:57.983818    3839 logs.go:276] 0 containers: []
	W0731 10:18:57.983830    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:57.983896    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:57.996645    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:18:57.996663    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:18:57.996668    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:18:58.015616    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:18:58.015628    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:18:58.029641    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:18:58.029651    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:18:58.041907    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:18:58.041919    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:18:58.057772    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:18:58.057782    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:18:58.072548    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:18:58.072562    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:18:58.084723    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:18:58.084734    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:18:58.097680    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:18:58.097694    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:18:58.109832    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:58.109846    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:58.133334    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:58.133342    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:18:58.152040    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:58.152134    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:58.169864    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:58.169878    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:58.174383    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:58.174389    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:58.209779    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:18:58.209794    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:58.221370    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:58.221384    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:18:58.221411    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:18:58.221416    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:18:58.221422    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:18:58.221427    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:18:58.221431    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:18:58.394607    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:58.394626    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 10:18:58.760336    3892 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 10:18:58.766479    3892 out.go:177] * Enabled addons: storage-provisioner
	I0731 10:18:58.773418    3892 addons.go:510] duration metric: took 30.483093667s for enable addons: enabled=[storage-provisioner]
	I0731 10:19:03.395482    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:03.395534    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:08.225365    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:08.396921    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:08.396947    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:13.227525    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:13.227715    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:13.240203    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:13.240276    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:13.251138    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:13.251210    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:13.267023    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:13.267092    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:13.277320    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:13.277387    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:13.397758    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:13.397774    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:13.288003    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:13.288067    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:13.298369    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:13.298428    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:13.308930    3839 logs.go:276] 0 containers: []
	W0731 10:19:13.308943    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:13.308998    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:13.319454    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:13.319467    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:13.319473    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:13.339471    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:13.339566    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:13.356994    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:13.357002    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:13.362402    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:13.362409    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:13.374198    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:13.374210    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:13.399213    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:13.399222    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:13.411483    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:13.411494    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:13.448740    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:13.448751    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:13.465413    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:13.465429    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:13.479989    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:13.480000    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:13.492690    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:13.492700    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:13.510020    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:13.510034    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:13.521519    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:13.521529    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:13.546485    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:13.546493    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:13.557790    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:13.557800    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:13.557832    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:13.557837    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:13.557841    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:13.557847    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:13.557850    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:18.399431    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:18.399476    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:23.399997    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:23.400035    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:23.561823    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:28.402189    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:28.402310    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:28.415897    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:28.415971    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:28.426737    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:28.426811    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:28.443696    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:28.443759    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:28.454150    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:28.454218    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:28.464799    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:28.464873    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:28.476791    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:28.476860    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:28.487366    3892 logs.go:276] 0 containers: []
	W0731 10:19:28.487378    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:28.487433    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:28.498518    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:28.498537    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:28.498541    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:28.535673    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:28.535685    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:28.576814    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:28.576823    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:28.591975    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:28.591987    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:28.608154    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:28.608167    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:28.626631    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:28.626645    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:28.639678    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:28.639692    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:28.646806    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:28.646815    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:28.662439    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:28.662454    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:28.677885    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:28.677897    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:28.691012    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:28.691023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:28.704284    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:28.704295    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:28.728831    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:28.728849    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:31.243516    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:28.564037    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:28.564125    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:28.576131    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:28.576196    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:28.587832    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:28.587900    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:28.609607    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:28.609672    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:28.621375    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:28.621444    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:28.634003    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:28.634073    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:28.646131    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:28.646205    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:28.658004    3839 logs.go:276] 0 containers: []
	W0731 10:19:28.658016    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:28.658073    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:28.670638    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:28.670654    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:28.670659    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:28.691302    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:28.691398    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:28.709336    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:28.709349    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:28.747307    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:28.747321    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:28.762571    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:28.762582    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:28.777075    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:28.777088    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:28.791853    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:28.791868    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:28.804612    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:28.804626    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:28.816547    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:28.816563    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:28.820807    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:28.820816    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:28.833902    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:28.833913    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:28.846381    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:28.846395    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:28.858492    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:28.858501    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:28.876070    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:28.876080    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:28.899462    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:28.899471    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:28.899497    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:28.899502    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:28.899505    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:28.899528    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:28.899532    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:19:36.246017    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:36.246205    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:36.259681    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:36.259748    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:36.270621    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:36.270693    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:36.281687    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:36.281751    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:36.304645    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:36.304712    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:36.314963    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:36.315036    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:36.326321    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:36.326397    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:36.336142    3892 logs.go:276] 0 containers: []
	W0731 10:19:36.336153    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:36.336209    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:36.347099    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:36.347114    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:36.347121    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:36.358773    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:36.358782    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:36.381839    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:36.381847    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:36.399554    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:36.399565    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:36.411386    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:36.411401    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:36.422688    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:36.422701    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:36.440292    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:36.440303    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:36.459979    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:36.459994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:36.487416    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:36.487426    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:36.498806    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:36.498819    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:36.537875    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:36.537885    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:36.542670    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:36.542677    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:36.580135    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:36.580151    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
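	Each pass above is the same diagnostic cycle: probe /healthz with a short client timeout, and on timeout enumerate the control-plane containers by name filter and tail each one's logs. A minimal bash sketch that reproduces one pass by hand inside the guest (the address, name filters, and tail length are taken verbatim from the log; this approximates what logs.go drives over ssh_runner, not its exact code):

	    #!/bin/bash
	    # one diagnostic pass, as seen in the log above
	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo

	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet storage-provisioner; do
	      ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
	      [ -z "$ids" ] && { echo "No container was found matching \"${name}\""; continue; }
	      for id in $ids; do
	        echo "== ${name} [${id}] =="
	        docker logs --tail 400 "$id"
	      done
	    done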
	I0731 10:19:39.096402    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:38.901827    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:44.098657    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:44.098744    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:44.111008    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:44.111080    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:44.123411    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:44.123479    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:44.136817    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:44.136895    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:44.148704    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:44.148781    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:44.161264    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:44.161336    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:44.177888    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:44.177961    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:44.193220    3892 logs.go:276] 0 containers: []
	W0731 10:19:44.193232    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:44.193294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:44.205122    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:44.205136    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:44.205142    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:44.209838    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:44.209852    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:44.222707    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:44.222719    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:44.239205    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:44.239217    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:44.253828    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:44.253840    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:44.273722    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:44.273735    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:44.298530    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:44.298538    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:44.337088    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:44.337096    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:44.374265    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:44.374277    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:44.389066    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:44.389079    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:44.403012    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:44.403023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:44.415125    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:44.415136    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:44.426981    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:44.426991    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:46.941581    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:43.904277    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:43.904575    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:43.940173    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:43.940271    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:43.960175    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:43.960244    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:43.973726    3839 logs.go:276] 2 containers: [f664f53381a1 f8da2f27925d]
	I0731 10:19:43.973804    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:43.985415    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:43.985486    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:43.998975    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:43.999047    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:44.010181    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:44.010242    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:44.021421    3839 logs.go:276] 0 containers: []
	W0731 10:19:44.021430    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:44.021480    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:44.036270    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:44.036286    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:44.036292    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:44.053478    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:44.053573    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:44.070765    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:44.070773    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:44.108641    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:44.108658    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:44.124110    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:44.124118    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:44.137330    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:44.137339    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:44.156757    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:44.156769    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:44.183891    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:44.183907    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:44.196969    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:44.196980    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:44.201861    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:44.201872    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:44.217802    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:44.217817    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:44.231141    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:44.231154    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:44.247573    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:44.247590    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:44.260884    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:44.260895    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:44.274703    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:44.274714    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:44.274739    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:44.274743    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:44.274760    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:44.274764    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:44.274767    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
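	Throughout this section, two minikube processes (pids 3839 and 3892, one per upgrade test running in parallel) interleave their output, which is why wall-clock timestamps appear to jump backwards at the seams (e.g. a 10:19:44 line from 3839 following a 10:19:51 line from 3892). If the combined capture is saved to a file, it can be split per process on the third whitespace-separated field (combined.log is a hypothetical filename; the field positions match the "I0731 10:19:28.899471    3839 ..." format used above):

	    # write minikube-3839.log and minikube-3892.log from the interleaved capture
	    awk '{ print > ("minikube-" $3 ".log") }' combined.log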
	I0731 10:19:51.944187    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:51.944413    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:51.968717    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:51.968820    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:51.985260    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:51.985345    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:51.999781    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:51.999861    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:52.011094    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:52.011164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:52.022093    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:52.022161    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:52.033293    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:52.033363    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:52.044679    3892 logs.go:276] 0 containers: []
	W0731 10:19:52.044692    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:52.044751    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:52.055667    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:52.055682    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:52.055688    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:52.069536    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:52.069550    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:52.108073    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:52.108082    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:52.144077    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:52.144089    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:52.159595    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:52.159606    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:52.174655    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:52.174665    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:52.189122    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:52.189133    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:52.208956    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:52.208972    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:52.221444    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:52.221458    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:52.244713    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:52.244721    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:52.248781    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:52.248786    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:52.264207    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:52.264221    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:52.276170    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:52.276184    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:54.793572    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:54.278736    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:59.795671    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:59.795769    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:59.808620    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:59.808685    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:59.820929    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:59.820999    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:59.832633    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:59.832700    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:59.843400    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:59.843471    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:59.854536    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:59.854604    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:59.865390    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:59.865459    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:59.875986    3892 logs.go:276] 0 containers: []
	W0731 10:19:59.875996    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:59.876048    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:59.887041    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:59.887061    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:59.887066    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:59.899186    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:59.899197    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:59.915227    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:59.915237    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:59.927505    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:59.927516    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:59.962903    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:59.962913    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:59.977536    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:59.977546    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:59.989617    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:59.989629    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:00.001999    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:00.002010    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:00.017377    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:00.017387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:00.030600    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:00.030610    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:00.067979    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:00.067992    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:00.072689    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:00.072697    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:00.090913    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:00.090924    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:59.281231    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:59.281708    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:59.318544    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:19:59.318688    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:59.339507    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:19:59.339630    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:59.354328    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:19:59.354411    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:59.366679    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:19:59.366746    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:59.377121    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:19:59.377195    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:59.387777    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:19:59.387841    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:59.398724    3839 logs.go:276] 0 containers: []
	W0731 10:19:59.398735    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:59.398789    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:59.409337    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:19:59.409353    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:59.409358    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:59.414539    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:59.414550    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:59.448975    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:19:59.448986    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:19:59.463800    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:19:59.463813    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:19:59.481501    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:59.481513    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:19:59.500936    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:59.501029    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:59.518240    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:19:59.518245    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:19:59.535192    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:19:59.535204    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:19:59.551892    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:19:59.551904    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:59.563054    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:19:59.563063    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:19:59.574984    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:19:59.574994    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:19:59.586494    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:59.586504    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:59.611972    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:19:59.611980    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:19:59.623956    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:19:59.623965    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:19:59.650617    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:19:59.650627    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:19:59.663052    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:19:59.663063    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:19:59.676289    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:59.676302    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:19:59.676331    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:19:59.676336    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:19:59.676340    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:19:59.676343    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:19:59.676346    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
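	Note that the coredns filter for pid 3839 now returns four containers where earlier cycles saw two: [be1796385dea eb3c8b821d51] have appeared alongside [f664f53381a1 f8da2f27925d], which typically means the coredns pods were restarted or re-created during the upgrade while the old containers remain visible to `docker ps -a`. Their states can be compared directly (a sketch using only the filter shown in the log plus docker's standard table format):

	    # show every coredns container with its current status and name
	    docker ps -a --filter name=k8s_coredns \
	      --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}'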
	I0731 10:20:02.617914    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:07.620105    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:07.620294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:07.634839    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:07.634915    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:07.649003    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:07.649071    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:07.660776    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:07.660844    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:07.671782    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:07.671846    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:07.683369    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:07.683443    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:07.695212    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:07.695292    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:07.706963    3892 logs.go:276] 0 containers: []
	W0731 10:20:07.706975    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:07.707035    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:07.721557    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:07.721575    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:07.721582    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:07.758728    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:07.758736    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:07.794930    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:07.794944    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:07.814897    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:07.814911    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:07.827149    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:07.827159    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:07.842652    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:07.842662    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:07.855266    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:07.855276    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:07.873477    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:07.873487    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:07.886445    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:07.886456    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:07.910488    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:07.910497    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:07.915176    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:07.915182    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:07.929907    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:07.929919    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:07.947386    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:07.947397    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:10.461760    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:09.678136    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:15.463925    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:15.464079    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:15.478937    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:15.479013    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:15.490439    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:15.490513    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:15.500832    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:15.500904    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:15.511111    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:15.511178    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:15.522128    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:15.522199    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:15.532754    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:15.532813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:15.543969    3892 logs.go:276] 0 containers: []
	W0731 10:20:15.543981    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:15.544049    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:15.554630    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:15.554644    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:15.554649    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:15.579631    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:15.579639    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:15.617946    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:15.617957    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:15.655282    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:15.655295    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:15.673862    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:15.673874    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:15.685713    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:15.685724    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:15.697566    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:15.697580    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:15.714537    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:15.714549    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:15.727310    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:15.727322    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:15.732007    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:15.732018    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:15.746676    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:15.746687    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:15.762787    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:15.762798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:15.774977    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:15.774988    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:14.680432    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:14.680695    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:14.704308    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:14.704402    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:14.721078    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:14.721160    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:14.734289    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:14.734373    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:14.745516    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:14.745586    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:14.757072    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:14.757140    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:14.767444    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:14.767510    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:14.778283    3839 logs.go:276] 0 containers: []
	W0731 10:20:14.778298    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:14.778351    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:14.789031    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:14.789048    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:14.789053    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:14.808667    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:14.808680    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:14.820661    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:14.820671    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:14.832850    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:14.832861    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:14.848680    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:14.848691    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:14.853812    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:14.853822    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:14.868391    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:14.868401    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:14.879534    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:14.879544    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:14.891532    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:14.891547    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:14.908348    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:14.908446    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:14.925620    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:14.925626    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:14.937424    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:14.937434    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:14.949737    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:14.949749    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:14.985378    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:14.985393    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:15.001866    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:15.001877    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:15.013795    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:15.013810    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:15.038935    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:15.038943    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:15.038970    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:20:15.038974    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:15.038978    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:15.038983    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:15.038985    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:18.291643    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:23.293843    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:23.294039    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:23.314250    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:23.314330    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:23.326834    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:23.326910    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:23.338344    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:23.338403    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:23.348431    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:23.348502    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:23.359213    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:23.359306    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:23.370069    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:23.370134    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:23.380109    3892 logs.go:276] 0 containers: []
	W0731 10:20:23.380119    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:23.380171    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:23.391553    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:23.391568    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:23.391573    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:23.395824    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:23.395829    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:23.409476    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:23.409490    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:23.420948    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:23.420962    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:23.435971    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:23.435981    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:23.460342    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:23.460350    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:23.472647    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:23.472661    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:23.509793    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:23.509800    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:23.523498    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:23.523512    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:23.535267    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:23.535280    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:23.547254    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:23.547268    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:23.564527    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:23.564541    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:23.576246    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:23.576259    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:26.114433    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:25.042922    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:31.116578    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:31.116731    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:31.129668    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:31.129744    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:31.140514    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:31.140582    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:31.151300    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:31.151369    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:31.161915    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:31.161978    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:31.172785    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:31.172860    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:31.187069    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:31.187132    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:31.197201    3892 logs.go:276] 0 containers: []
	W0731 10:20:31.197215    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:31.197268    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:31.208118    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:31.208133    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:31.208139    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:31.246984    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:31.246996    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:31.261753    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:31.261765    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:31.273911    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:31.273926    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:31.285666    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:31.285675    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:31.297367    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:31.297380    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:31.314830    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:31.314841    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:31.352115    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:31.352122    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:31.356358    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:31.356364    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:31.370188    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:31.370198    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:31.384958    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:31.384971    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:31.396045    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:31.396055    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:31.419331    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:31.419341    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
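	Every healthz probe in this section fails the same way: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the Go HTTP client gave up after its roughly 5-second timeout without receiving any response headers, i.e. the apiserver at 10.0.2.15:8443 is not answering at all, as opposed to answering unhealthy. A small polling loop in the same spirit (sketch only; the 120-second budget is an arbitrary illustration, while the per-attempt timeout mirrors the ~5 s gap between "Checking" and "stopped" lines above):

	    # poll until /healthz answers "ok" or the budget runs out
	    deadline=$((SECONDS + 120))
	    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx 'ok'; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo 'apiserver never became healthy' >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo 'apiserver healthy'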
	I0731 10:20:30.045192    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:30.045324    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:30.058077    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:30.058149    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:30.069362    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:30.069432    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:30.080525    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:30.080599    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:30.091932    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:30.091998    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:30.102609    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:30.102673    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:30.113069    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:30.113129    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:30.123419    3839 logs.go:276] 0 containers: []
	W0731 10:20:30.123431    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:30.123482    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:30.133880    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:30.133898    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:30.133903    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:30.151520    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:30.151531    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:30.165612    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:30.165624    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:30.184095    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:30.184104    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:30.259369    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:30.259380    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:30.271907    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:30.271921    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:30.287339    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:30.287353    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:30.299217    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:30.299227    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:30.316350    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:30.316445    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:30.333518    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:30.333525    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:30.345718    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:30.345729    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:30.357874    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:30.357889    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:30.383042    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:30.383050    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:30.387153    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:30.387160    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:30.401408    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:30.401418    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:30.427065    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:30.427078    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:30.446470    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:30.446484    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:30.446509    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:20:30.446513    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:30.446517    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:30.446521    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:30.446524    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
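
Each cycle above follows the same fixed pattern: probe the apiserver's /healthz with a short client timeout, and on failure enumerate the control-plane containers by docker name filter, then tail each one's logs. A minimal local sketch of that pattern in Go (illustrative only, not minikube's actual api_server.go/logs.go code; the URL, filter name, and 400-line tail are taken from the log above):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os/exec"
    	"strings"
    	"time"
    )

    // checkHealthz mirrors the "Checking apiserver healthz" step: a GET with a
    // short client timeout against the cluster's self-signed endpoint.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // the log shows ~5s between probe and "stopped:"
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    // containersByName mirrors the `docker ps -a --filter=name=... --format={{.ID}}` step.
    func containersByName(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name="+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
    		return // apiserver is healthy; nothing to collect
    	}
    	ids, err := containersByName("k8s_kube-apiserver")
    	if err != nil {
    		fmt.Println("container lookup failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	for _, id := range ids {
    		// Tail each container's logs, as `docker logs --tail 400 <id>` above.
    		out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Print(string(out))
    	}
    }
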
	I0731 10:20:33.934643    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:38.935043    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:38.935332    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:38.975064    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:38.975178    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:38.991030    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:38.991108    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:39.006854    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:39.006926    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:39.017746    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:39.017813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:39.027994    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:39.028062    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:39.038907    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:39.038976    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:39.049636    3892 logs.go:276] 0 containers: []
	W0731 10:20:39.049648    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:39.049705    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:39.060111    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:39.060126    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:39.060131    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:39.074785    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:39.074794    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:39.089329    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:39.089342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:39.101309    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:39.101322    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:39.125784    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:39.125797    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:39.137348    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:39.137360    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:39.149328    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:39.149340    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:39.188467    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:39.188477    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:39.192739    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:39.192748    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:39.228145    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:39.228155    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:39.239612    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:39.239624    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:39.254932    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:39.254945    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:39.267371    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:39.267384    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:41.785621    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:40.450490    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:46.787927    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:46.788164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:46.815339    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:46.815455    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:46.833332    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:46.833411    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:46.846886    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:20:46.846987    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:46.859539    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:46.859607    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:46.870488    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:46.870553    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:46.880925    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:46.880995    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:46.904338    3892 logs.go:276] 0 containers: []
	W0731 10:20:46.904352    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:46.904407    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:46.917070    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:46.917088    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:46.917093    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:46.932398    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:46.932408    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:46.944270    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:46.944283    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:46.961849    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:46.961859    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:46.973755    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:46.973766    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:46.978572    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:20:46.978579    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:20:46.990639    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:20:46.990649    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:20:47.002224    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:47.002236    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:47.038686    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:47.038696    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:47.051433    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:47.051444    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:47.075014    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:47.075023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:47.086918    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:47.086930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:47.099325    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:47.099336    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:47.137788    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:47.137798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:45.451705    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:45.451925    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:45.468026    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:20:45.468097    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:45.481171    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:20:45.481240    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:45.492703    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:20:45.492770    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:45.503336    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:20:45.503409    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:45.513922    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:20:45.513979    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:45.524220    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:20:45.524284    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:45.534286    3839 logs.go:276] 0 containers: []
	W0731 10:20:45.534297    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:45.534354    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:45.544543    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:20:45.544562    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:45.544568    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:20:45.563893    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:45.563988    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:45.580959    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:20:45.580967    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:20:45.598727    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:20:45.598741    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:20:45.610112    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:20:45.610122    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:45.622280    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:20:45.622290    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:20:45.636448    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:20:45.636461    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:20:45.648568    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:20:45.648581    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:20:45.660380    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:20:45.660396    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:20:45.676183    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:20:45.676194    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:20:45.694950    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:45.694959    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:45.699467    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:45.699473    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:45.735189    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:45.735204    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:45.758758    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:20:45.758766    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:20:45.772096    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:20:45.772105    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:20:45.783785    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:20:45.783794    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:20:45.799428    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:45.799436    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:20:45.799465    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:20:45.799470    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:20:45.799473    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:20:45.799478    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:20:45.799481    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:20:47.151902    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:47.151912    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:49.667735    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:54.670090    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:54.670754    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:54.698561    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:54.698650    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:54.715839    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:54.715903    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:54.729613    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:20:54.729678    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:54.741587    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:54.741640    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:54.752056    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:54.752111    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:54.762722    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:54.762775    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:54.773263    3892 logs.go:276] 0 containers: []
	W0731 10:20:54.773272    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:54.773319    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:54.784072    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:54.784089    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:54.784095    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:54.823624    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:54.823634    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:54.827902    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:54.827911    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:54.842796    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:54.842806    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:54.878211    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:54.878224    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:54.890400    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:54.890409    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:54.902900    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:54.902911    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:54.928782    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:54.928792    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:54.947468    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:54.947481    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:54.959981    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:54.959991    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:54.977646    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:54.977657    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:54.990800    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:54.990813    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:55.005458    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:20:55.005469    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:20:55.017135    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:20:55.017145    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:20:55.029857    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:55.029868    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:55.802738    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:57.544532    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:00.804951    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:00.805059    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:00.816670    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:00.816733    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:00.830844    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:00.830907    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:00.841364    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:00.841429    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:00.852220    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:00.852285    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:00.869761    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:00.869828    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:00.883884    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:00.883953    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:00.893903    3839 logs.go:276] 0 containers: []
	W0731 10:21:00.893914    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:00.893972    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:00.904332    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:00.904352    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:00.904357    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:00.917287    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:00.917300    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:00.928861    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:00.928875    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:00.941016    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:00.941026    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:00.958721    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:00.958733    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:00.964058    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:00.964067    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:01.001795    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:01.001809    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:01.022049    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:01.022060    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:01.039958    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:01.039968    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:01.054133    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:01.054146    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:01.067777    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:01.067787    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:01.079653    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:01.079663    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:01.094311    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:01.094325    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:01.117963    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:01.117973    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:01.129709    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:01.129719    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:01.146574    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:01.146667    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:01.163722    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:01.163730    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:01.163752    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:01.163756    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:01.163769    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:01.163772    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:01.163775    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
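
The "Found kubelet problem" warnings above come from scanning the kubelet journal for known failure patterns. A sketch under the assumption of simple substring matching (the real matcher in logs.go may differ; the patterns below are hypothetical, lifted from the flagged lines):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same journal read the gatherer issues: sudo journalctl -u kubelet -n 400
    	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	out, err := cmd.StdoutPipe()
    	if err != nil {
    		panic(err)
    	}
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	// Hypothetical substring patterns, taken from the problems reported above.
    	patterns := []string{"is forbidden:", "failed to list"}
    	sc := bufio.NewScanner(out)
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range patterns {
    			if strings.Contains(line, p) {
    				fmt.Println("Found kubelet problem:", line)
    				break
    			}
    		}
    	}
    	cmd.Wait()
    }

The flagged lines themselves are node-authorizer denials: under Node authorization a kubelet may only read a ConfigMap referenced by a pod bound to its node, and "no relationship found" typically means the apiserver's node authorizer has no such binding recorded for running-upgrade-323000 yet, consistent with a control plane still settling after the upgrade.
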
	I0731 10:21:02.546699    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:02.546956    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:02.572123    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:02.572224    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:02.588333    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:02.588423    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:02.602469    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:02.602546    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:02.613572    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:02.613643    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:02.624418    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:02.624487    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:02.634959    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:02.635028    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:02.646320    3892 logs.go:276] 0 containers: []
	W0731 10:21:02.646330    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:02.646390    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:02.656778    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:02.656796    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:02.656802    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:02.699504    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:02.699519    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:02.711581    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:02.711592    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:02.715869    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:02.715877    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:02.730120    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:02.730130    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:02.742005    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:02.742015    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:02.769845    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:02.769857    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:02.781864    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:02.781874    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:02.807446    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:02.807453    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:02.845379    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:02.845387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:02.859669    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:02.859682    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:02.871580    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:02.871593    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:02.886173    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:02.886185    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:02.897554    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:02.897564    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:02.909155    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:02.909167    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:05.423683    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:10.425932    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:10.426147    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:10.445411    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:10.445501    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:10.460759    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:10.460829    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:10.473200    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:10.473279    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:10.483997    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:10.484060    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:10.494572    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:10.494645    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:10.505239    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:10.505311    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:10.515362    3892 logs.go:276] 0 containers: []
	W0731 10:21:10.515375    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:10.515433    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:10.526028    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:10.526045    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:10.526051    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:10.530383    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:10.530389    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:10.544995    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:10.545008    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:10.557332    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:10.557342    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:10.582723    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:10.582731    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:10.593875    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:10.593884    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:10.631838    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:10.631850    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:10.668545    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:10.668562    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:10.683675    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:10.683688    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:10.698183    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:10.698194    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:10.710327    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:10.710341    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:10.722110    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:10.722121    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:10.734404    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:10.734416    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:10.749133    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:10.749144    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:10.760540    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:10.760551    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:11.166503    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:13.279856    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:16.168687    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:16.168799    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:16.180596    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:16.180673    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:16.191472    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:16.191549    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:16.202200    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:16.202273    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:16.212798    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:16.212862    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:16.222668    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:16.222736    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:16.232892    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:16.232964    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:16.244927    3839 logs.go:276] 0 containers: []
	W0731 10:21:16.244938    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:16.244994    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:16.255089    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:16.255105    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:16.255110    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:16.267056    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:16.267066    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:16.290331    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:16.290338    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:16.294827    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:16.294836    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:16.306842    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:16.306851    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:16.319017    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:16.319026    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:16.331072    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:16.331086    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:16.367528    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:16.367540    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:16.379650    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:16.379662    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:16.397256    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:16.397269    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:16.411708    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:16.411718    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:16.423293    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:16.423304    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:16.442243    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:16.442336    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:16.459770    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:16.459779    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:16.478033    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:16.478046    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:16.491057    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:16.491068    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:16.505267    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:16.505278    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:16.505306    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:16.505311    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:16.505317    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:16.505321    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:16.505326    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:21:18.281956    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:18.282085    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:18.292542    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:18.292612    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:18.306096    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:18.306157    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:18.316822    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:18.316898    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:18.327245    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:18.327305    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:18.338228    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:18.338291    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:18.349138    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:18.349205    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:18.359561    3892 logs.go:276] 0 containers: []
	W0731 10:21:18.359574    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:18.359632    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:18.375171    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:18.375189    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:18.375202    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:18.380064    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:18.380073    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:18.391545    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:18.391560    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:18.416636    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:18.416644    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:18.455439    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:18.455449    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:18.470610    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:18.470622    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:18.483064    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:18.483075    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:18.496321    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:18.496336    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:18.509279    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:18.509290    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:18.545714    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:18.545725    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:18.559618    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:18.559628    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:18.571235    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:18.571244    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:18.590156    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:18.590165    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:18.602239    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:18.602249    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:18.619601    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:18.619616    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:21.133255    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:26.135504    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:26.135677    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:26.150486    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:26.150565    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:26.164258    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:26.164328    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:26.176147    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:26.176218    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:26.201631    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:26.201704    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:26.216807    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:26.216876    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:26.233215    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:26.233285    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:26.247535    3892 logs.go:276] 0 containers: []
	W0731 10:21:26.247551    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:26.247606    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:26.261963    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:26.261979    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:26.261985    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:26.299193    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:26.299207    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:26.320990    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:26.321002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:26.337316    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:26.337331    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:26.348726    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:26.348740    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:26.363409    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:26.363420    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:26.380291    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:26.380303    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:26.392127    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:26.392137    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:26.396408    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:26.396415    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:26.408007    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:26.408017    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:26.420098    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:26.420108    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:26.431655    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:26.431664    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:26.471198    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:26.471206    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:26.485398    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:26.485409    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:26.497496    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:26.497507    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:26.507361    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:29.027380    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:31.509645    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:31.509867    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:31.537205    3839 logs.go:276] 1 containers: [8d30b90253cf]
	I0731 10:21:31.537293    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:31.553129    3839 logs.go:276] 1 containers: [c642df22c40f]
	I0731 10:21:31.553202    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:31.564760    3839 logs.go:276] 4 containers: [be1796385dea eb3c8b821d51 f664f53381a1 f8da2f27925d]
	I0731 10:21:31.564857    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:31.577695    3839 logs.go:276] 1 containers: [d2de42993f92]
	I0731 10:21:31.577774    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:31.588426    3839 logs.go:276] 1 containers: [3b52c0a76cc0]
	I0731 10:21:31.588496    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:31.599106    3839 logs.go:276] 1 containers: [aa8ca22e32a8]
	I0731 10:21:31.599174    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:31.609379    3839 logs.go:276] 0 containers: []
	W0731 10:21:31.609392    3839 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:31.609450    3839 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:31.620017    3839 logs.go:276] 1 containers: [89fcc353b96c]
	I0731 10:21:31.620032    3839 logs.go:123] Gathering logs for coredns [f664f53381a1] ...
	I0731 10:21:31.620038    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f664f53381a1"
	I0731 10:21:31.631803    3839 logs.go:123] Gathering logs for kube-scheduler [d2de42993f92] ...
	I0731 10:21:31.631815    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2de42993f92"
	I0731 10:21:31.647036    3839 logs.go:123] Gathering logs for kube-controller-manager [aa8ca22e32a8] ...
	I0731 10:21:31.647046    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa8ca22e32a8"
	I0731 10:21:31.664705    3839 logs.go:123] Gathering logs for container status ...
	I0731 10:21:31.664715    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:31.676710    3839 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:31.676720    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:31.719999    3839 logs.go:123] Gathering logs for coredns [eb3c8b821d51] ...
	I0731 10:21:31.720019    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb3c8b821d51"
	I0731 10:21:31.741780    3839 logs.go:123] Gathering logs for storage-provisioner [89fcc353b96c] ...
	I0731 10:21:31.741792    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89fcc353b96c"
	I0731 10:21:31.754021    3839 logs.go:123] Gathering logs for kube-proxy [3b52c0a76cc0] ...
	I0731 10:21:31.754032    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b52c0a76cc0"
	I0731 10:21:31.765955    3839 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:31.765965    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:31.771367    3839 logs.go:123] Gathering logs for kube-apiserver [8d30b90253cf] ...
	I0731 10:21:31.771377    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d30b90253cf"
	I0731 10:21:31.788946    3839 logs.go:123] Gathering logs for etcd [c642df22c40f] ...
	I0731 10:21:31.788958    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c642df22c40f"
	I0731 10:21:31.803063    3839 logs.go:123] Gathering logs for coredns [be1796385dea] ...
	I0731 10:21:31.803074    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be1796385dea"
	I0731 10:21:31.819049    3839 logs.go:123] Gathering logs for coredns [f8da2f27925d] ...
	I0731 10:21:31.819061    3839 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8da2f27925d"
	I0731 10:21:31.830470    3839 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:31.830481    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 10:21:31.849586    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:31.849682    3839 logs.go:138] Found kubelet problem: Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:31.867026    3839 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:31.867037    3839 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:31.890908    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:31.890915    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 10:21:31.890938    3839 out.go:239] X Problems detected in kubelet:
	W0731 10:21:31.890942    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: W0731 17:13:46.855491    4038 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	W0731 10:21:31.890946    3839 out.go:239]   Jul 31 17:13:46 running-upgrade-323000 kubelet[4038]: E0731 17:13:46.855583    4038 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-323000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-323000' and this object
	I0731 10:21:31.890964    3839 out.go:304] Setting ErrFile to fd 2...
	I0731 10:21:31.890969    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:21:34.029644    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:34.029889    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:34.048585    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:34.048670    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:34.062098    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:34.062177    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:34.076163    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:34.076234    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:34.088618    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:34.088684    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:34.099609    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:34.099669    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:34.111165    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:34.111235    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:34.121743    3892 logs.go:276] 0 containers: []
	W0731 10:21:34.121759    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:34.121810    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:34.132612    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:34.132629    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:34.132633    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:34.144753    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:34.144766    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:34.157259    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:34.157271    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:34.169630    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:34.169643    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:34.208085    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:34.208103    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:34.213004    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:34.213011    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:34.251626    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:34.251637    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:34.266852    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:34.266863    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:34.281884    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:34.281895    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:34.294763    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:34.294774    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:34.307301    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:34.307313    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:34.322256    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:34.322269    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:34.340673    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:34.340683    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:34.366125    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:34.366132    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:34.380221    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:34.380232    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:36.892961    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:41.894475    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:41.894668    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:41.915939    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:41.916041    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:41.932571    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:41.932647    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:41.948748    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:41.948815    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:41.960186    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:41.960248    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:41.971137    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:41.971201    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:41.982374    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:41.982434    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:41.993411    3892 logs.go:276] 0 containers: []
	W0731 10:21:41.993422    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:41.993472    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:42.014110    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:42.014129    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:42.014134    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:42.027050    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:42.027060    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:42.039163    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:42.039172    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:42.053195    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:42.053203    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:42.067561    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:42.067573    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:42.087995    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:42.088006    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:42.100069    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:42.100079    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:42.125466    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:42.125474    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:42.137838    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:42.137848    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:41.894485    3839 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:46.896741    3839 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:46.900795    3839 out.go:177] 
	W0731 10:21:46.904870    3839 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 10:21:46.904881    3839 out.go:239] * 
	W0731 10:21:46.905858    3839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:21:46.915733    3839 out.go:177] 
	I0731 10:21:42.177677    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:42.177694    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:42.190524    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:42.190534    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:42.203593    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:42.203603    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:42.207902    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:42.207909    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:42.248193    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:42.248206    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:42.262320    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:42.262331    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:44.786579    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:49.788948    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:49.789203    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:49.817309    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:49.817410    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:49.834254    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:49.834337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:49.849089    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:49.849167    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:49.861130    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:49.861200    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:49.872641    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:49.872704    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:49.883559    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:49.883622    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:49.894281    3892 logs.go:276] 0 containers: []
	W0731 10:21:49.894299    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:49.894355    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:49.905066    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:49.905082    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:49.905089    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:49.927151    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:49.927163    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:49.945021    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:49.945030    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:49.984732    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:49.984742    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:50.020645    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:50.020659    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:50.035578    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:50.035587    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:50.047561    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:50.047572    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:50.059690    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:50.059700    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:50.075096    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:50.075110    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:50.079498    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:50.079503    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:50.093746    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:50.093760    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:50.105675    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:50.105685    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:50.117827    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:50.117839    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:50.130292    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:50.130307    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:50.142786    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:50.142800    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:52.670027    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:57.670438    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:57.670560    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:57.688162    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:57.688248    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:57.701377    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:57.701449    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:57.712624    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:57.712699    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:57.723529    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:57.723599    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:57.734473    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:57.734545    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:57.745133    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:57.745198    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:57.755691    3892 logs.go:276] 0 containers: []
	W0731 10:21:57.755702    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:57.755763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:57.766027    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:57.766048    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:57.766054    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:57.805071    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:57.805080    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:57.820060    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:57.820070    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:57.832083    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:57.832095    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:57.855599    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:57.855606    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:57.873434    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:57.873448    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:57.888390    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:57.888404    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:57.906410    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:57.906422    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:57.917821    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:57.917832    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:57.956726    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:57.956737    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:57.969023    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:57.969036    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:57.973996    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:57.974002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:57.986331    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:57.986342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:57.999687    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:57.999698    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:58.012890    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:58.012902    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:22:00.526863    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
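
The run above ends where it began: each /healthz probe times out after roughly five seconds, the client re-enumerates the component containers, gathers their logs, and retries until the six-minute wait budget expires (see the GUEST_START failure earlier in this log). A minimal Go sketch of that probe loop — names, timeouts, and the fallback step are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz probes the apiserver once with a short timeout,
    // mirroring the ~5s gap between probes in the log above.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// the probe does not trust the cluster's self-signed cert
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "context deadline exceeded", as seen above
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s" budget from the failure message
    	for time.Now().Before(deadline) {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    			fmt.Println("stopped:", err)
    			// a real client would re-enumerate containers and gather logs here
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		fmt.Println("apiserver healthy")
    		return
    	}
    	fmt.Println("apiserver healthz never reported healthy")
    }
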
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-31 17:12:45 UTC, ends at Wed 2024-07-31 17:22:02 UTC. --
	Jul 31 17:21:45 running-upgrade-323000 dockerd[3161]: time="2024-07-31T17:21:45.082098477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 17:21:45 running-upgrade-323000 dockerd[3161]: time="2024-07-31T17:21:45.082125892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 17:21:45 running-upgrade-323000 dockerd[3161]: time="2024-07-31T17:21:45.082131558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 17:21:45 running-upgrade-323000 dockerd[3161]: time="2024-07-31T17:21:45.082202887Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/885ee555a9a4e5a1fcb93568f1b1bf5880a365b2ba2bfc389f00941096e37ca7 pid=16185 runtime=io.containerd.runc.v2
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x40006c7580 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x40006c7740 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x4000883180 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x4000883280 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x4000882040 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x40006c76c0 linux}"
	Jul 31 17:21:45 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:45Z" level=error msg="ContainerStats resp: {0x4000882b00 linux}"
	Jul 31 17:21:46 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 17:21:51 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 17:21:55 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:55Z" level=error msg="ContainerStats resp: {0x400090b880 linux}"
	Jul 31 17:21:55 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:55Z" level=error msg="ContainerStats resp: {0x400090b9c0 linux}"
	Jul 31 17:21:56 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 17:21:57 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:57Z" level=error msg="ContainerStats resp: {0x400089b140 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x4000393080 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x400089bdc0 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x4000a12400 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x4000393c80 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x4000a13000 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x4000a137c0 linux}"
	Jul 31 17:21:58 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:21:58Z" level=error msg="ContainerStats resp: {0x40001a6600 linux}"
	Jul 31 17:22:01 running-upgrade-323000 cri-dockerd[2995]: time="2024-07-31T17:22:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8ab83ed27df20       edaa71f2aee88       17 seconds ago      Running             coredns                   2                   447d99f72da3b
	885ee555a9a4e       edaa71f2aee88       17 seconds ago      Running             coredns                   2                   3b07a0b21603e
	be1796385dea3       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   447d99f72da3b
	eb3c8b821d51b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3b07a0b21603e
	89fcc353b96c3       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a076df047f332
	3b52c0a76cc04       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   bb0a87723b7c0
	c642df22c40f4       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   29182dd236c81
	aa8ca22e32a8c       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ff3911fb61df6
	8d30b90253cff       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   f36e38f206c68
	d2de42993f92f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   2868f4193e675
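
Each component query in the client log is a single docker ps -a filtered on the k8s_ name prefix that cri-dockerd gives pod containers. A small self-contained sketch of the same enumeration, assuming only a local docker CLI on PATH (the component list mirrors the one walked repeatedly above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // carries the k8s_<component> prefix, one ID per output line.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    	}
    }
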
	
	
	==> coredns [885ee555a9a4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1289521281675400903.1339224392186106388. HINFO: read udp 10.244.0.2:52914->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1289521281675400903.1339224392186106388. HINFO: read udp 10.244.0.2:50767->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1289521281675400903.1339224392186106388. HINFO: read udp 10.244.0.2:33361->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1289521281675400903.1339224392186106388. HINFO: read udp 10.244.0.2:59902->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8ab83ed27df2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6350188549789741588.6223061810595512670. HINFO: read udp 10.244.0.3:47593->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6350188549789741588.6223061810595512670. HINFO: read udp 10.244.0.3:41833->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6350188549789741588.6223061810595512670. HINFO: read udp 10.244.0.3:39271->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6350188549789741588.6223061810595512670. HINFO: read udp 10.244.0.3:35075->10.0.2.3:53: i/o timeout
	
	
	==> coredns [be1796385dea] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:35012->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:48057->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:32978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:45797->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:49415->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:49747->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:57640->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:54396->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:52670->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 162849216057369074.5994417819266993222. HINFO: read udp 10.244.0.3:48907->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb3c8b821d51] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:41027->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:60672->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:56526->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:46632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:53092->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:43686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:56809->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:44269->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:41973->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7378191606036981964.586182642657088233. HINFO: read udp 10.244.0.2:56962->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
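
All four coredns instances report the same failure mode: forwarded lookups to the QEMU user-mode resolver at 10.0.2.3:53 time out. A short sketch, assuming that upstream address, that reproduces the check from Go by forcing all lookups through it:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
    			// route every lookup through the suspect upstream
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, "udp", "10.0.2.3:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.io")
    	if err != nil {
    		fmt.Println("upstream DNS unreachable:", err) // matches the i/o timeouts above
    		return
    	}
    	fmt.Println("upstream DNS ok:", addrs)
    }
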
	
	
	==> describe nodes <==
	Name:               running-upgrade-323000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-323000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9
	                    minikube.k8s.io/name=running-upgrade-323000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T10_17_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 17:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-323000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 17:21:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 17:17:42 +0000   Wed, 31 Jul 2024 17:17:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 17:17:42 +0000   Wed, 31 Jul 2024 17:17:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 17:17:42 +0000   Wed, 31 Jul 2024 17:17:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 17:17:42 +0000   Wed, 31 Jul 2024 17:17:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-323000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0862bfdeff14cc5bc84e134c11291c0
	  System UUID:                a0862bfdeff14cc5bc84e134c11291c0
	  Boot ID:                    482798a6-97f3-4627-a889-d4f6e8c21775
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dnsb7                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-wrdml                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-323000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-323000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-running-upgrade-323000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-rw5ls                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-323000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-323000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-323000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-323000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-323000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-323000 event: Registered Node running-upgrade-323000 in Controller
	
	
	==> dmesg <==
	[  +1.655352] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.076913] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.076029] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.142388] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.077790] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.081166] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[Jul31 17:13] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +10.145261] systemd-fstab-generator[1930]: Ignoring "noauto" for root device
	[  +4.755933] systemd-fstab-generator[2210]: Ignoring "noauto" for root device
	[  +0.150036] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +0.105819] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.099505] systemd-fstab-generator[2273]: Ignoring "noauto" for root device
	[  +3.360876] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.165844] systemd-fstab-generator[2952]: Ignoring "noauto" for root device
	[  +0.082279] systemd-fstab-generator[2963]: Ignoring "noauto" for root device
	[  +0.077932] systemd-fstab-generator[2974]: Ignoring "noauto" for root device
	[  +0.092044] systemd-fstab-generator[2988]: Ignoring "noauto" for root device
	[  +2.304935] systemd-fstab-generator[3148]: Ignoring "noauto" for root device
	[  +3.527901] systemd-fstab-generator[3570]: Ignoring "noauto" for root device
	[  +1.739689] systemd-fstab-generator[4032]: Ignoring "noauto" for root device
	[ +19.551074] kauditd_printk_skb: 68 callbacks suppressed
	[Jul31 17:14] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 17:17] systemd-fstab-generator[10638]: Ignoring "noauto" for root device
	[  +5.655684] systemd-fstab-generator[11226]: Ignoring "noauto" for root device
	[  +0.464064] systemd-fstab-generator[11358]: Ignoring "noauto" for root device
	
	
	==> etcd [c642df22c40f] <==
	{"level":"info","ts":"2024-07-31T17:17:37.794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-31T17:17:37.794Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-31T17:17:37.795Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T17:17:37.795Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T17:17:37.795Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T17:17:37.795Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T17:17:37.795Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T17:17:38.438Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-323000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:17:38.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T17:17:38.441Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-31T17:17:38.441Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T17:17:38.441Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T17:17:38.441Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:22:03 up 9 min,  0 users,  load average: 0.38, 0.32, 0.17
	Linux running-upgrade-323000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8d30b90253cf] <==
	I0731 17:17:39.625936       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 17:17:39.654701       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 17:17:39.654747       1 cache.go:39] Caches are synced for autoregister controller
	I0731 17:17:39.663597       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 17:17:39.663624       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 17:17:39.663975       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 17:17:39.676512       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 17:17:40.390687       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 17:17:40.565368       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 17:17:40.567135       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 17:17:40.567175       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 17:17:40.706455       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 17:17:40.722841       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 17:17:40.818749       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 17:17:40.820773       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0731 17:17:40.821154       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 17:17:40.822357       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 17:17:41.683488       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 17:17:42.143352       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 17:17:42.146937       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 17:17:42.172961       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 17:17:42.184944       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 17:17:56.189515       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0731 17:17:56.692879       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 17:17:56.739578       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [aa8ca22e32a8] <==
	I0731 17:17:55.888497       1 shared_informer.go:262] Caches are synced for HPA
	I0731 17:17:55.888502       1 shared_informer.go:262] Caches are synced for job
	I0731 17:17:55.888507       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 17:17:55.888514       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0731 17:17:55.888519       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 17:17:55.888523       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0731 17:17:55.890658       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 17:17:55.942220       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 17:17:55.989513       1 shared_informer.go:262] Caches are synced for deployment
	I0731 17:17:55.994576       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 17:17:56.034329       1 shared_informer.go:262] Caches are synced for expand
	I0731 17:17:56.038117       1 shared_informer.go:262] Caches are synced for disruption
	I0731 17:17:56.038124       1 disruption.go:371] Sending events to api server.
	I0731 17:17:56.038962       1 shared_informer.go:262] Caches are synced for persistent volume
	I0731 17:17:56.039496       1 shared_informer.go:262] Caches are synced for PV protection
	I0731 17:17:56.039506       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0731 17:17:56.087655       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0731 17:17:56.091136       1 shared_informer.go:262] Caches are synced for attach detach
	I0731 17:17:56.193515       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rw5ls"
	I0731 17:17:56.507305       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 17:17:56.545598       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 17:17:56.545612       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 17:17:56.741134       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0731 17:17:56.891070       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dnsb7"
	I0731 17:17:56.896645       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wrdml"
	
	
	==> kube-proxy [3b52c0a76cc0] <==
	I0731 17:17:56.677844       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0731 17:17:56.677881       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0731 17:17:56.677891       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 17:17:56.690826       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 17:17:56.690840       1 server_others.go:206] "Using iptables Proxier"
	I0731 17:17:56.690853       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 17:17:56.690957       1 server.go:661] "Version info" version="v1.24.1"
	I0731 17:17:56.690965       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 17:17:56.691305       1 config.go:317] "Starting service config controller"
	I0731 17:17:56.691319       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 17:17:56.691328       1 config.go:226] "Starting endpoint slice config controller"
	I0731 17:17:56.691329       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 17:17:56.691847       1 config.go:444] "Starting node config controller"
	I0731 17:17:56.691859       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 17:17:56.791701       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 17:17:56.791726       1 shared_informer.go:262] Caches are synced for service config
	I0731 17:17:56.791910       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [d2de42993f92] <==
	W0731 17:17:39.611255       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 17:17:39.611260       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 17:17:39.611271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 17:17:39.611275       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 17:17:39.611286       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 17:17:39.611289       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 17:17:39.611300       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 17:17:39.611303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 17:17:39.611314       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 17:17:39.611317       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 17:17:39.611328       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 17:17:39.611332       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 17:17:39.611343       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 17:17:39.611346       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 17:17:39.611356       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 17:17:39.611359       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 17:17:39.611398       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 17:17:39.611408       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 17:17:40.539634       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 17:17:40.539657       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 17:17:40.588845       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 17:17:40.588863       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 17:17:40.653037       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 17:17:40.653142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0731 17:17:40.997035       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-31 17:12:45 UTC, ends at Wed 2024-07-31 17:22:03 UTC. --
	Jul 31 17:17:44 running-upgrade-323000 kubelet[11232]: I0731 17:17:44.365263   11232 request.go:601] Waited for 1.119325953s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 31 17:17:44 running-upgrade-323000 kubelet[11232]: E0731 17:17:44.369539   11232 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-323000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-323000"
	Jul 31 17:17:55 running-upgrade-323000 kubelet[11232]: I0731 17:17:55.834149   11232 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:17:55 running-upgrade-323000 kubelet[11232]: I0731 17:17:55.920771   11232 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 17:17:55 running-upgrade-323000 kubelet[11232]: I0731 17:17:55.921220   11232 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.025397   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/26ccbce7-272a-473e-a2cb-be21684471c9-tmp\") pod \"storage-provisioner\" (UID: \"26ccbce7-272a-473e-a2cb-be21684471c9\") " pod="kube-system/storage-provisioner"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.025424   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rzv\" (UniqueName: \"kubernetes.io/projected/26ccbce7-272a-473e-a2cb-be21684471c9-kube-api-access-l9rzv\") pod \"storage-provisioner\" (UID: \"26ccbce7-272a-473e-a2cb-be21684471c9\") " pod="kube-system/storage-provisioner"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: E0731 17:17:56.129261   11232 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: E0731 17:17:56.129279   11232 projected.go:192] Error preparing data for projected volume kube-api-access-l9rzv for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: E0731 17:17:56.129310   11232 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/26ccbce7-272a-473e-a2cb-be21684471c9-kube-api-access-l9rzv podName:26ccbce7-272a-473e-a2cb-be21684471c9 nodeName:}" failed. No retries permitted until 2024-07-31 17:17:56.629298663 +0000 UTC m=+14.497450652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l9rzv" (UniqueName: "kubernetes.io/projected/26ccbce7-272a-473e-a2cb-be21684471c9-kube-api-access-l9rzv") pod "storage-provisioner" (UID: "26ccbce7-272a-473e-a2cb-be21684471c9") : configmap "kube-root-ca.crt" not found
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.197253   11232 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.330929   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edc75f4b-eb22-4cf2-84bf-6dc62e91178a-kube-proxy\") pod \"kube-proxy-rw5ls\" (UID: \"edc75f4b-eb22-4cf2-84bf-6dc62e91178a\") " pod="kube-system/kube-proxy-rw5ls"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.330960   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc75f4b-eb22-4cf2-84bf-6dc62e91178a-lib-modules\") pod \"kube-proxy-rw5ls\" (UID: \"edc75f4b-eb22-4cf2-84bf-6dc62e91178a\") " pod="kube-system/kube-proxy-rw5ls"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.330974   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7bpq\" (UniqueName: \"kubernetes.io/projected/edc75f4b-eb22-4cf2-84bf-6dc62e91178a-kube-api-access-m7bpq\") pod \"kube-proxy-rw5ls\" (UID: \"edc75f4b-eb22-4cf2-84bf-6dc62e91178a\") " pod="kube-system/kube-proxy-rw5ls"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.330985   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc75f4b-eb22-4cf2-84bf-6dc62e91178a-xtables-lock\") pod \"kube-proxy-rw5ls\" (UID: \"edc75f4b-eb22-4cf2-84bf-6dc62e91178a\") " pod="kube-system/kube-proxy-rw5ls"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.898054   11232 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:17:56 running-upgrade-323000 kubelet[11232]: I0731 17:17:56.906075   11232 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.035446   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krp7t\" (UniqueName: \"kubernetes.io/projected/fd5a8bce-c6d3-4332-a920-a8591a657745-kube-api-access-krp7t\") pod \"coredns-6d4b75cb6d-wrdml\" (UID: \"fd5a8bce-c6d3-4332-a920-a8591a657745\") " pod="kube-system/coredns-6d4b75cb6d-wrdml"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.035476   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e703f16-299c-4734-892e-f4b00e5e8196-config-volume\") pod \"coredns-6d4b75cb6d-dnsb7\" (UID: \"5e703f16-299c-4734-892e-f4b00e5e8196\") " pod="kube-system/coredns-6d4b75cb6d-dnsb7"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.035487   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5a8bce-c6d3-4332-a920-a8591a657745-config-volume\") pod \"coredns-6d4b75cb6d-wrdml\" (UID: \"fd5a8bce-c6d3-4332-a920-a8591a657745\") " pod="kube-system/coredns-6d4b75cb6d-wrdml"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.035502   11232 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtm8\" (UniqueName: \"kubernetes.io/projected/5e703f16-299c-4734-892e-f4b00e5e8196-kube-api-access-txtm8\") pod \"coredns-6d4b75cb6d-dnsb7\" (UID: \"5e703f16-299c-4734-892e-f4b00e5e8196\") " pod="kube-system/coredns-6d4b75cb6d-dnsb7"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.432966   11232 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3b07a0b21603e1d4b1cf10a46b5b2e1880e8fa2be1849bca002646bab769acc5"
	Jul 31 17:17:57 running-upgrade-323000 kubelet[11232]: I0731 17:17:57.449756   11232 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="447d99f72da3b64b7ff093d5029dc7f5f2a59d5fe6ba843c66d7737c240a49b5"
	Jul 31 17:21:45 running-upgrade-323000 kubelet[11232]: I0731 17:21:45.727747   11232 scope.go:110] "RemoveContainer" containerID="f664f53381a1acffdec36beba70291e426485838535ff14443fe44439024db56"
	Jul 31 17:21:45 running-upgrade-323000 kubelet[11232]: I0731 17:21:45.756680   11232 scope.go:110] "RemoveContainer" containerID="f8da2f27925de8626dfb45cb136ab8e53b107b65067fca75deff0143358dec18"
	
	
	==> storage-provisioner [89fcc353b96c] <==
	I0731 17:17:56.939247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 17:17:56.945975       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 17:17:56.945992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 17:17:56.950530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 17:17:56.950620       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-323000_57ecd98f-c933-447a-b4a6-57e6d7a53502!
	I0731 17:17:56.951077       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8d5061d-2484-45c7-8a19-2dd12589e647", APIVersion:"v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-323000_57ecd98f-c933-447a-b4a6-57e6d7a53502 became leader
	I0731 17:17:57.053452       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-323000_57ecd98f-c933-447a-b4a6-57e6d7a53502!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-323000 -n running-upgrade-323000
E0731 10:22:03.511674    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-323000 -n running-upgrade-323000: exit status 2 (15.576604875s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-323000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-323000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-323000: (1.165841541s)
--- FAIL: TestRunningBinaryUpgrade (590.20s)
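Triage note: the upgraded cluster itself came up (etcd elected a leader and the apiserver finished syncing its caches in the logs above), but the closing status probe exited with status 2 after ~15.6s and reported the apiserver as "Stopped". A minimal sketch of manual checks one could run against the same profile before the cleanup step, assuming the standard minikube CLI and the docker runtime shown in this log (the container-name filter is illustrative, not from the harness):

	# Re-run the harness's status probe by hand
	out/minikube-darwin-arm64 status -p running-upgrade-323000
	# Inspect the apiserver container state inside the VM
	out/minikube-darwin-arm64 ssh -p running-upgrade-323000 -- docker ps -a --filter name=kube-apiserver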

TestKubernetesUpgrade (19.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.095841042s)

-- stdout --
	* [kubernetes-upgrade-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-013000" primary control-plane node in "kubernetes-upgrade-013000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-013000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:12:10.377287    3751 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:12:10.377447    3751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:10.377450    3751 out.go:304] Setting ErrFile to fd 2...
	I0731 10:12:10.377453    3751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:10.377576    3751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:12:10.378633    3751 out.go:298] Setting JSON to false
	I0731 10:12:10.394649    3751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2494,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:12:10.394711    3751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:12:10.399121    3751 out.go:177] * [kubernetes-upgrade-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:12:10.406142    3751 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:12:10.406182    3751 notify.go:220] Checking for updates...
	I0731 10:12:10.413174    3751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:12:10.416106    3751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:12:10.419184    3751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:12:10.422163    3751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:12:10.423533    3751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:12:10.426490    3751 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:12:10.426552    3751 config.go:182] Loaded profile config "offline-docker-290000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:12:10.426606    3751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:12:10.431107    3751 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:12:10.437139    3751 start.go:297] selected driver: qemu2
	I0731 10:12:10.437147    3751 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:12:10.437154    3751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:12:10.439366    3751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:12:10.444180    3751 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:12:10.445842    3751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 10:12:10.445857    3751 cni.go:84] Creating CNI manager for ""
	I0731 10:12:10.445863    3751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 10:12:10.445886    3751 start.go:340] cluster config:
	{Name:kubernetes-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:12:10.449502    3751 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:12:10.458175    3751 out.go:177] * Starting "kubernetes-upgrade-013000" primary control-plane node in "kubernetes-upgrade-013000" cluster
	I0731 10:12:10.462133    3751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 10:12:10.462150    3751 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 10:12:10.462167    3751 cache.go:56] Caching tarball of preloaded images
	I0731 10:12:10.462250    3751 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:12:10.462256    3751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 10:12:10.462317    3751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kubernetes-upgrade-013000/config.json ...
	I0731 10:12:10.462335    3751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kubernetes-upgrade-013000/config.json: {Name:mk280d5f0e9422bfd8a98ed6ed7c68306077cc16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:12:10.462552    3751 start.go:360] acquireMachinesLock for kubernetes-upgrade-013000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:10.475322    3751 start.go:364] duration metric: took 12.762875ms to acquireMachinesLock for "kubernetes-upgrade-013000"
	I0731 10:12:10.475338    3751 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:12:10.475394    3751 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:12:10.484128    3751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:12:10.502956    3751 start.go:159] libmachine.API.Create for "kubernetes-upgrade-013000" (driver="qemu2")
	I0731 10:12:10.502986    3751 client.go:168] LocalClient.Create starting
	I0731 10:12:10.503053    3751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:12:10.503082    3751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:10.503092    3751 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:10.503131    3751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:12:10.503155    3751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:10.503165    3751 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:10.503504    3751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:12:10.665813    3751 main.go:141] libmachine: Creating SSH key...
	I0731 10:12:10.707892    3751 main.go:141] libmachine: Creating Disk image...
	I0731 10:12:10.707897    3751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:12:10.708098    3751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:10.717430    3751 main.go:141] libmachine: STDOUT: 
	I0731 10:12:10.717450    3751 main.go:141] libmachine: STDERR: 
	I0731 10:12:10.717499    3751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2 +20000M
	I0731 10:12:10.725331    3751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:12:10.725347    3751 main.go:141] libmachine: STDERR: 
	I0731 10:12:10.725366    3751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:10.725370    3751 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:12:10.725381    3751 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:10.725406    3751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:7b:a2:8a:e1:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:10.727098    3751 main.go:141] libmachine: STDOUT: 
	I0731 10:12:10.727112    3751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:10.727131    3751 client.go:171] duration metric: took 224.144583ms to LocalClient.Create
	I0731 10:12:12.729281    3751 start.go:128] duration metric: took 2.253899667s to createHost
	I0731 10:12:12.729332    3751 start.go:83] releasing machines lock for "kubernetes-upgrade-013000", held for 2.254039125s
	W0731 10:12:12.729411    3751 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:12.735581    3751 out.go:177] * Deleting "kubernetes-upgrade-013000" in qemu2 ...
	W0731 10:12:12.770060    3751 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:12.770091    3751 start.go:729] Will try again in 5 seconds ...
	I0731 10:12:17.772342    3751 start.go:360] acquireMachinesLock for kubernetes-upgrade-013000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:18.008299    3751 start.go:364] duration metric: took 235.849ms to acquireMachinesLock for "kubernetes-upgrade-013000"
	I0731 10:12:18.008474    3751 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:12:18.008667    3751 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:12:18.018040    3751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:12:18.066338    3751 start.go:159] libmachine.API.Create for "kubernetes-upgrade-013000" (driver="qemu2")
	I0731 10:12:18.066392    3751 client.go:168] LocalClient.Create starting
	I0731 10:12:18.066519    3751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:12:18.066571    3751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:18.066588    3751 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:18.066641    3751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:12:18.066670    3751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:12:18.066680    3751 main.go:141] libmachine: Parsing certificate...
	I0731 10:12:18.067189    3751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:12:18.258896    3751 main.go:141] libmachine: Creating SSH key...
	I0731 10:12:18.388696    3751 main.go:141] libmachine: Creating Disk image...
	I0731 10:12:18.388705    3751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:12:18.388866    3751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:18.398211    3751 main.go:141] libmachine: STDOUT: 
	I0731 10:12:18.398228    3751 main.go:141] libmachine: STDERR: 
	I0731 10:12:18.398288    3751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2 +20000M
	I0731 10:12:18.406606    3751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:12:18.406626    3751 main.go:141] libmachine: STDERR: 
	I0731 10:12:18.406637    3751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:18.406643    3751 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:12:18.406651    3751 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:18.406684    3751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:16:2a:2e:c0:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:18.408381    3751 main.go:141] libmachine: STDOUT: 
	I0731 10:12:18.408395    3751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:18.408408    3751 client.go:171] duration metric: took 342.017834ms to LocalClient.Create
	I0731 10:12:20.410636    3751 start.go:128] duration metric: took 2.401960375s to createHost
	I0731 10:12:20.410698    3751 start.go:83] releasing machines lock for "kubernetes-upgrade-013000", held for 2.402390542s
	W0731 10:12:20.411002    3751 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-013000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:20.419616    3751 out.go:177] 
	W0731 10:12:20.423740    3751 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:12:20.423773    3751 out.go:239] * 
	* 
	W0731 10:12:20.426215    3751 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:12:20.432694    3751 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
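Triage note: both VM create attempts fail at the same host-side step, before any guest provisioning begins: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the QEMU process is never launched. A minimal sketch of host checks, using only the paths that appear in the qemu command line above (how socket_vmnet is supervised on this agent is not shown in the log, so the process check is an assumption):

	# Is the socket present on the host?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process alive?
	pgrep -fl socket_vmnet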
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-013000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-013000: (4.088027292s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-013000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-013000 status --format={{.Host}}: exit status 7 (63.723083ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.210894s)

-- stdout --
	* [kubernetes-upgrade-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-013000" primary control-plane node in "kubernetes-upgrade-013000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-013000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:12:24.632733    3801 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:12:24.632852    3801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:24.632855    3801 out.go:304] Setting ErrFile to fd 2...
	I0731 10:12:24.632857    3801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:12:24.632973    3801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:12:24.634041    3801 out.go:298] Setting JSON to false
	I0731 10:12:24.650620    3801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2508,"bootTime":1722443436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:12:24.650700    3801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:12:24.655479    3801 out.go:177] * [kubernetes-upgrade-013000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:12:24.663528    3801 notify.go:220] Checking for updates...
	I0731 10:12:24.667449    3801 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:12:24.674467    3801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:12:24.681489    3801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:12:24.689446    3801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:12:24.697323    3801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:12:24.705467    3801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:12:24.708713    3801 config.go:182] Loaded profile config "kubernetes-upgrade-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 10:12:24.708963    3801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:12:24.713510    3801 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:12:24.720476    3801 start.go:297] selected driver: qemu2
	I0731 10:12:24.720481    3801 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:12:24.720529    3801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:12:24.722921    3801 cni.go:84] Creating CNI manager for ""
	I0731 10:12:24.722938    3801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:12:24.722973    3801 start.go:340] cluster config:
	{Name:kubernetes-upgrade-013000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-013000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:12:24.726557    3801 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:12:24.734293    3801 out.go:177] * Starting "kubernetes-upgrade-013000" primary control-plane node in "kubernetes-upgrade-013000" cluster
	I0731 10:12:24.738447    3801 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 10:12:24.738464    3801 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 10:12:24.738485    3801 cache.go:56] Caching tarball of preloaded images
	I0731 10:12:24.738562    3801 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:12:24.738568    3801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 10:12:24.738641    3801 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kubernetes-upgrade-013000/config.json ...
	I0731 10:12:24.739133    3801 start.go:360] acquireMachinesLock for kubernetes-upgrade-013000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:24.739170    3801 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "kubernetes-upgrade-013000"
	I0731 10:12:24.739180    3801 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:12:24.739190    3801 fix.go:54] fixHost starting: 
	I0731 10:12:24.739314    3801 fix.go:112] recreateIfNeeded on kubernetes-upgrade-013000: state=Stopped err=<nil>
	W0731 10:12:24.739325    3801 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:12:24.746426    3801 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-013000" ...
	I0731 10:12:24.750436    3801 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:24.750484    3801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:16:2a:2e:c0:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:24.752585    3801 main.go:141] libmachine: STDOUT: 
	I0731 10:12:24.752604    3801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:24.752635    3801 fix.go:56] duration metric: took 13.448875ms for fixHost
	I0731 10:12:24.752644    3801 start.go:83] releasing machines lock for "kubernetes-upgrade-013000", held for 13.46525ms
	W0731 10:12:24.752651    3801 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:12:24.752684    3801 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:24.752689    3801 start.go:729] Will try again in 5 seconds ...
	I0731 10:12:29.752962    3801 start.go:360] acquireMachinesLock for kubernetes-upgrade-013000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:12:29.753416    3801 start.go:364] duration metric: took 343.958µs to acquireMachinesLock for "kubernetes-upgrade-013000"
	I0731 10:12:29.753548    3801 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:12:29.753569    3801 fix.go:54] fixHost starting: 
	I0731 10:12:29.754244    3801 fix.go:112] recreateIfNeeded on kubernetes-upgrade-013000: state=Stopped err=<nil>
	W0731 10:12:29.754271    3801 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:12:29.763655    3801 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-013000" ...
	I0731 10:12:29.767752    3801 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:12:29.767999    3801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:16:2a:2e:c0:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubernetes-upgrade-013000/disk.qcow2
	I0731 10:12:29.777980    3801 main.go:141] libmachine: STDOUT: 
	I0731 10:12:29.778042    3801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:12:29.778121    3801 fix.go:56] duration metric: took 24.553084ms for fixHost
	I0731 10:12:29.778144    3801 start.go:83] releasing machines lock for "kubernetes-upgrade-013000", held for 24.7025ms
	W0731 10:12:29.778326    3801 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-013000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:12:29.785641    3801 out.go:177] 
	W0731 10:12:29.789966    3801 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:12:29.789996    3801 out.go:239] * 
	* 
	W0731 10:12:29.792486    3801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:12:29.801769    3801 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-013000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-013000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-013000 version --output=json: exit status 1 (61.146667ms)

** stderr ** 
	error: context "kubernetes-upgrade-013000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-31 10:12:29.876453 -0700 PDT m=+2004.848143918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-013000 -n kubernetes-upgrade-013000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-013000 -n kubernetes-upgrade-013000: exit status 7 (33.923208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-013000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-013000
--- FAIL: TestKubernetesUpgrade (19.64s)
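Both restart attempts above fail at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the run exits with GUEST_PROVISION. The sketch below is a hypothetical pre-flight check, not part of the test suite: it only confirms that a socket_vmnet daemon is accepting connections on the path reported in the log (assumes the BSD nc shipped with macOS, and may need the same privileges as the driver).

	# Hypothetical diagnostic: verify socket_vmnet is reachable before
	# "minikube start --driver=qemu2". nc's exit status reflects whether
	# a connection to the unix socket succeeded.
	SOCK=/var/run/socket_vmnet
	if [ -S "$SOCK" ] && nc -z -U "$SOCK" 2>/dev/null; then
		echo "socket_vmnet is accepting connections on $SOCK"
	else
		echo "socket_vmnet unreachable; qemu2 starts will fail with 'Connection refused'" >&2
	fi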

TestStoppedBinaryUpgrade/Upgrade (610.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2362016945 start -p stopped-upgrade-608000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2362016945 start -p stopped-upgrade-608000 --memory=2200 --vm-driver=qemu2 : (1m15.959972625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2362016945 -p stopped-upgrade-608000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2362016945 -p stopped-upgrade-608000 stop: (12.097642208s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0731 10:17:03.516489    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 10:18:14.282628    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 10:20:06.582769    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.057593459s)

-- stdout --
	* [stopped-upgrade-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-608000" primary control-plane node in "stopped-upgrade-608000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-608000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 10:13:47.144531    3892 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:13:47.144712    3892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:47.144716    3892 out.go:304] Setting ErrFile to fd 2...
	I0731 10:13:47.144718    3892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:13:47.144846    3892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:13:47.145803    3892 out.go:298] Setting JSON to false
	I0731 10:13:47.164069    3892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2591,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:13:47.164149    3892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:13:47.168986    3892 out.go:177] * [stopped-upgrade-608000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:13:47.177148    3892 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:13:47.177211    3892 notify.go:220] Checking for updates...
	I0731 10:13:47.185129    3892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:13:47.186393    3892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:13:47.189111    3892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:13:47.192133    3892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:13:47.195139    3892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:13:47.198441    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:13:47.202092    3892 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 10:13:47.205968    3892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:13:47.210177    3892 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:13:47.220115    3892 start.go:297] selected driver: qemu2
	I0731 10:13:47.220127    3892 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:47.220192    3892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:13:47.223128    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:13:47.223150    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:13:47.223174    3892 start.go:340] cluster config:
	{Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:13:47.223235    3892 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:13:47.228202    3892 out.go:177] * Starting "stopped-upgrade-608000" primary control-plane node in "stopped-upgrade-608000" cluster
	I0731 10:13:47.236101    3892 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:13:47.236122    3892 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 10:13:47.236137    3892 cache.go:56] Caching tarball of preloaded images
	I0731 10:13:47.236214    3892 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:13:47.236221    3892 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 10:13:47.236289    3892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/config.json ...
	I0731 10:13:47.236582    3892 start.go:360] acquireMachinesLock for stopped-upgrade-608000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:13:47.236616    3892 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "stopped-upgrade-608000"
	I0731 10:13:47.236625    3892 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:13:47.236631    3892 fix.go:54] fixHost starting: 
	I0731 10:13:47.236744    3892 fix.go:112] recreateIfNeeded on stopped-upgrade-608000: state=Stopped err=<nil>
	W0731 10:13:47.236752    3892 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:13:47.245134    3892 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-608000" ...
	I0731 10:13:47.248120    3892 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:13:47.248190    3892 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50281-:22,hostfwd=tcp::50282-:2376,hostname=stopped-upgrade-608000 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/disk.qcow2
	I0731 10:13:47.294470    3892 main.go:141] libmachine: STDOUT: 
	I0731 10:13:47.294496    3892 main.go:141] libmachine: STDERR: 
	I0731 10:13:47.294502    3892 main.go:141] libmachine: Waiting for VM to start (ssh -p 50281 docker@127.0.0.1)...
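Unlike the kubernetes-upgrade profile above, this VM was created by the v1.26.0 binary with qemu's user-mode networking ("-nic user,...,hostfwd=tcp::50281-:22"), so no socket_vmnet daemon is involved and the restart succeeds; all access goes through localhost port forwards. A hypothetical manual equivalent of the wait step, using the forwarded port and key path reported in this log:

	# Hypothetical: probe the guest over the hostfwd SSH port until it answers.
	ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no -p 50281 \
		-i /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa \
		docker@127.0.0.1 true && echo "VM is up"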
	I0731 10:14:07.314987    3892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/config.json ...
	I0731 10:14:07.315567    3892 machine.go:94] provisionDockerMachine start ...
	I0731 10:14:07.315709    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.316056    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.316069    3892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 10:14:07.388054    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 10:14:07.388082    3892 buildroot.go:166] provisioning hostname "stopped-upgrade-608000"
	I0731 10:14:07.388212    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.388432    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.388443    3892 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-608000 && echo "stopped-upgrade-608000" | sudo tee /etc/hostname
	I0731 10:14:07.457818    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-608000
	
	I0731 10:14:07.457886    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.458023    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.458034    3892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-608000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-608000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-608000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:14:07.518569    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:14:07.518584    3892 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19349-1049/.minikube CaCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19349-1049/.minikube}
	I0731 10:14:07.518594    3892 buildroot.go:174] setting up certificates
	I0731 10:14:07.518599    3892 provision.go:84] configureAuth start
	I0731 10:14:07.518608    3892 provision.go:143] copyHostCerts
	I0731 10:14:07.518685    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem, removing ...
	I0731 10:14:07.518692    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem
	I0731 10:14:07.518804    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/cert.pem (1123 bytes)
	I0731 10:14:07.519011    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem, removing ...
	I0731 10:14:07.519015    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem
	I0731 10:14:07.519078    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/key.pem (1679 bytes)
	I0731 10:14:07.519203    3892 exec_runner.go:144] found /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem, removing ...
	I0731 10:14:07.519207    3892 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem
	I0731 10:14:07.519269    3892 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.pem (1082 bytes)
	I0731 10:14:07.519375    3892 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-608000 san=[127.0.0.1 localhost minikube stopped-upgrade-608000]
	I0731 10:14:07.663867    3892 provision.go:177] copyRemoteCerts
	I0731 10:14:07.663911    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:14:07.663920    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:07.694711    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:14:07.701534    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 10:14:07.708352    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 10:14:07.715722    3892 provision.go:87] duration metric: took 197.121125ms to configureAuth
	I0731 10:14:07.715730    3892 buildroot.go:189] setting minikube options for container-runtime
	I0731 10:14:07.715849    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:14:07.715882    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.715967    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.715973    3892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 10:14:07.770941    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 10:14:07.770949    3892 buildroot.go:70] root file system type: tmpfs
	I0731 10:14:07.770997    3892 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 10:14:07.771041    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.771142    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.771174    3892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 10:14:07.832064    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 10:14:07.832119    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:07.832234    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:07.832245    3892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 10:14:08.177276    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
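The one-liner above is an idempotent install: diff exits non-zero both when the unit differs and when the target does not exist yet, so the "can't stat" message is the expected first-provision path rather than an error, and only in that branch is the new unit moved into place and docker reloaded. A minimal sketch of the same idiom, reformatted for readability (not minikube source):

	# Compare-then-install: touch systemd state only when the unit changed
	# (or did not exist); leave it alone on a no-op re-provision.
	unit=/lib/systemd/system/docker.service
	if ! sudo diff -u "$unit" "$unit.new"; then
		sudo mv "$unit.new" "$unit"
		sudo systemctl -f daemon-reload
		sudo systemctl -f enable docker
		sudo systemctl -f restart docker
	fi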
	I0731 10:14:08.177292    3892 machine.go:97] duration metric: took 861.729625ms to provisionDockerMachine
	I0731 10:14:08.177298    3892 start.go:293] postStartSetup for "stopped-upgrade-608000" (driver="qemu2")
	I0731 10:14:08.177305    3892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:14:08.177361    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:14:08.177369    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:08.208679    3892 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:14:08.210051    3892 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 10:14:08.210060    3892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/addons for local assets ...
	I0731 10:14:08.210146    3892 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19349-1049/.minikube/files for local assets ...
	I0731 10:14:08.210273    3892 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem -> 15522.pem in /etc/ssl/certs
	I0731 10:14:08.210408    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 10:14:08.212952    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:14:08.220166    3892 start.go:296] duration metric: took 42.863417ms for postStartSetup
	I0731 10:14:08.220178    3892 fix.go:56] duration metric: took 20.983913s for fixHost
	I0731 10:14:08.220207    3892 main.go:141] libmachine: Using SSH client type: native
	I0731 10:14:08.220313    3892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104922a10] 0x104925270 <nil>  [] 0s} localhost 50281 <nil> <nil>}
	I0731 10:14:08.220321    3892 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 10:14:08.274397    3892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722446048.222363337
	
	I0731 10:14:08.274404    3892 fix.go:216] guest clock: 1722446048.222363337
	I0731 10:14:08.274408    3892 fix.go:229] Guest: 2024-07-31 10:14:08.222363337 -0700 PDT Remote: 2024-07-31 10:14:08.22018 -0700 PDT m=+21.094966668 (delta=2.183337ms)
	I0731 10:14:08.274417    3892 fix.go:200] guest clock delta is within tolerance: 2.183337ms
	I0731 10:14:08.274421    3892 start.go:83] releasing machines lock for "stopped-upgrade-608000", held for 21.038163667s
	I0731 10:14:08.274486    3892 ssh_runner.go:195] Run: cat /version.json
	I0731 10:14:08.274497    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:14:08.274487    3892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:14:08.274535    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	W0731 10:14:08.275041    3892 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50281: connect: connection refused
	I0731 10:14:08.275064    3892 retry.go:31] will retry after 148.398824ms: dial tcp [::1]:50281: connect: connection refused
	W0731 10:14:08.301543    3892 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 10:14:08.301595    3892 ssh_runner.go:195] Run: systemctl --version
	I0731 10:14:08.303515    3892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 10:14:08.305074    3892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 10:14:08.305098    3892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 10:14:08.307916    3892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 10:14:08.312210    3892 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 10:14:08.312219    3892 start.go:495] detecting cgroup driver to use...
	I0731 10:14:08.312297    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:14:08.319449    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 10:14:08.322363    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 10:14:08.325132    3892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 10:14:08.325151    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 10:14:08.328555    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:14:08.331927    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 10:14:08.335345    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 10:14:08.338319    3892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:14:08.341013    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 10:14:08.344314    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 10:14:08.347934    3892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 10:14:08.351318    3892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:14:08.353806    3892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:14:08.356717    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:08.412180    3892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 10:14:08.419063    3892 start.go:495] detecting cgroup driver to use...
	I0731 10:14:08.419130    3892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 10:14:08.424421    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:14:08.432955    3892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 10:14:08.443408    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:14:08.448510    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:14:08.453429    3892 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 10:14:08.496249    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 10:14:08.502346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:14:08.507851    3892 ssh_runner.go:195] Run: which cri-dockerd
	I0731 10:14:08.509022    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 10:14:08.511623    3892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 10:14:08.516627    3892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 10:14:08.600541    3892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 10:14:08.671613    3892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 10:14:08.671670    3892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 10:14:08.677027    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:08.757025    3892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:14:09.909817    3892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152793542s)
	I0731 10:14:09.909881    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 10:14:09.914271    3892 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 10:14:09.919075    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:14:09.923445    3892 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 10:14:09.991823    3892 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 10:14:10.050754    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:10.112700    3892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 10:14:10.119041    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 10:14:10.123229    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:10.195092    3892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 10:14:10.233498    3892 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 10:14:10.233565    3892 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 10:14:10.235479    3892 start.go:563] Will wait 60s for crictl version
	I0731 10:14:10.235520    3892 ssh_runner.go:195] Run: which crictl
	I0731 10:14:10.237065    3892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:14:10.251887    3892 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 10:14:10.251952    3892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:14:10.267858    3892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 10:14:10.292737    3892 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 10:14:10.292848    3892 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 10:14:10.294093    3892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:14:10.297557    3892 kubeadm.go:883] updating cluster {Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 10:14:10.297601    3892 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 10:14:10.297640    3892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:14:10.308124    3892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:14:10.308131    3892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
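The preload check fails here even though every v1.24.1 image is present, because the old binary imported them under their original k8s.gcr.io names while this minikube build looks for registry.k8s.io/kube-apiserver:v1.24.1 (Kubernetes images moved from k8s.gcr.io to registry.k8s.io). That name mismatch is what triggers the tarball re-copy below and, after the docker restart, the slower LoadCachedImages path. Purely as an illustration of the mismatch (a hypothetical sketch, not a suggested fix), retagging would satisfy the check:

	# Hypothetical illustration: alias the preloaded images under the
	# registry.k8s.io names that the newer minikube expects.
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
		docker tag "k8s.gcr.io/$img:v1.24.1" "registry.k8s.io/$img:v1.24.1"
	done
	docker tag k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7
	docker tag k8s.gcr.io/etcd:3.5.3-0 registry.k8s.io/etcd:3.5.3-0
	docker tag k8s.gcr.io/coredns/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6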
	I0731 10:14:10.308170    3892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:14:10.311717    3892 ssh_runner.go:195] Run: which lz4
	I0731 10:14:10.312964    3892 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 10:14:10.314186    3892 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 10:14:10.314195    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 10:14:11.267551    3892 docker.go:649] duration metric: took 954.633375ms to copy over tarball
	I0731 10:14:11.267608    3892 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 10:14:12.425256    3892 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157652584s)
	I0731 10:14:12.425269    3892 ssh_runner.go:146] rm: /preloaded.tar.lz4
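The steps above fetch and unpack the preload: the stat existence check fails, the tarball is scp'd from the host cache, then extracted into /var with extended attributes preserved. A condensed sketch of the guest-side half, using the paths from the log:

```bash
# Unpack the docker-overlay2 preload into /var, keeping security.capability
# xattrs (so binaries relying on file capabilities keep working), then
# reclaim the ~360MB the tarball occupies.
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
sudo rm /preloaded.tar.lz4
```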
	I0731 10:14:12.440851    3892 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 10:14:12.444498    3892 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 10:14:12.449627    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:12.520585    3892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 10:14:14.034728    3892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.514148666s)
	I0731 10:14:14.034821    3892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 10:14:14.048658    3892 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 10:14:14.048668    3892 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 10:14:14.048673    3892 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 10:14:14.053657    3892 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.055376    3892 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.057366    3892 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.057559    3892 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.059224    3892 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.059374    3892 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.061369    3892 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.061490    3892 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.062235    3892 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 10:14:14.063742    3892 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.063968    3892 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.064060    3892 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.065352    3892 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.065776    3892 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 10:14:14.067251    3892 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.068306    3892 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.455326    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.467721    3892 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 10:14:14.467747    3892 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.467807    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 10:14:14.477666    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.480743    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 10:14:14.487826    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.490464    3892 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 10:14:14.490482    3892 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.490517    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 10:14:14.501707    3892 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 10:14:14.501729    3892 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.501787    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 10:14:14.506333    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 10:14:14.512157    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 10:14:14.516674    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.518949    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 10:14:14.528864    3892 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 10:14:14.528889    3892 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.528941    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 10:14:14.537196    3892 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 10:14:14.537221    3892 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 10:14:14.537269    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 10:14:14.544175    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 10:14:14.544298    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:14:14.549905    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 10:14:14.549936    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 10:14:14.549946    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 10:14:14.550008    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 10:14:14.551699    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 10:14:14.551718    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0731 10:14:14.570839    3892 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 10:14:14.571011    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.580787    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.582889    3892 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 10:14:14.582900    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 10:14:14.609775    3892 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 10:14:14.609795    3892 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.609773    3892 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 10:14:14.609838    3892 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.609848    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 10:14:14.609914    3892 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 10:14:14.660129    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 10:14:14.660129    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 10:14:14.660226    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 10:14:14.660333    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:14:14.675191    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 10:14:14.675220    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0731 10:14:14.694213    3892 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 10:14:14.694332    3892 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.730488    3892 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 10:14:14.730510    3892 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.730572    3892 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:14:14.779734    3892 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 10:14:14.779870    3892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:14:14.783654    3892 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 10:14:14.783664    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 10:14:14.785174    3892 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 10:14:14.785200    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 10:14:14.897988    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 10:14:14.898016    3892 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 10:14:14.898024    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 10:14:15.194002    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 10:14:15.194027    3892 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 10:14:15.194033    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 10:14:15.348595    3892 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 10:14:15.348632    3892 cache_images.go:92] duration metric: took 1.299975125s to LoadCachedImages
	W0731 10:14:15.348676    3892 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
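Each image flagged "needs transfer" is scp'd into /var/lib/minikube/images and streamed into the daemon, mirroring the repeated `sudo cat ... | docker load` Run: lines above. A sketch of that load loop (the glob is illustrative; the real code drives each file individually):

```bash
# Stream every cached image tarball into the Docker daemon, then list what
# actually landed so a missing image (kube-apiserver here) stands out.
for img in /var/lib/minikube/images/*; do
  sudo cat "$img" | docker load
done
docker images --format '{{.Repository}}:{{.Tag}}'
```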
	I0731 10:14:15.348683    3892 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 10:14:15.348743    3892 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-608000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 10:14:15.348808    3892 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 10:14:15.362515    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:14:15.362526    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:14:15.362532    3892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 10:14:15.362541    3892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-608000 NodeName:stopped-upgrade-608000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 10:14:15.362607    3892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-608000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 10:14:15.362659    3892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 10:14:15.365373    3892 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:14:15.365399    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:14:15.368416    3892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 10:14:15.373306    3892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 10:14:15.378264    3892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 10:14:15.383522    3892 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 10:14:15.384712    3892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:14:15.388467    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:14:15.451870    3892 ssh_runner.go:195] Run: sudo systemctl start kubelet
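The lines above ship the kubelet drop-in, the unit file, and the new kubeadm.yaml, then bounce systemd. The equivalent commands, condensed (directory list copied from the mkdir Run: line):

```bash
# Create the target directories, pick up the freshly copied unit files,
# and start the kubelet that the drop-in now points at the v1.24.1 binaries.
sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
sudo systemctl daemon-reload
sudo systemctl start kubelet
```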
	I0731 10:14:15.458356    3892 certs.go:68] Setting up /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000 for IP: 10.0.2.15
	I0731 10:14:15.458366    3892 certs.go:194] generating shared ca certs ...
	I0731 10:14:15.458374    3892 certs.go:226] acquiring lock for ca certs: {Name:mkb79625058c6882a075e84437e973267a00b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.458528    3892 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key
	I0731 10:14:15.458582    3892 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key
	I0731 10:14:15.458590    3892 certs.go:256] generating profile certs ...
	I0731 10:14:15.458664    3892 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key
	I0731 10:14:15.458681    3892 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f
	I0731 10:14:15.458691    3892 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 10:14:15.589394    3892 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f ...
	I0731 10:14:15.589410    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f: {Name:mkdac14c68f1209e9af466ff8479005de0539e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.589927    3892 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f ...
	I0731 10:14:15.589933    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f: {Name:mkab18db9ee2ced4e2fc9a122561b6f64ffd5956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.590106    3892 certs.go:381] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt.6ce5540f -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt
	I0731 10:14:15.590271    3892 certs.go:385] copying /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key.6ce5540f -> /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key
	I0731 10:14:15.590419    3892 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.key
	I0731 10:14:15.590562    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem (1338 bytes)
	W0731 10:14:15.590593    3892 certs.go:480] ignoring /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552_empty.pem, impossibly tiny 0 bytes
	I0731 10:14:15.590598    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 10:14:15.590618    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:14:15.590637    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:14:15.590657    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/key.pem (1679 bytes)
	I0731 10:14:15.590697    3892 certs.go:484] found cert: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem (1708 bytes)
	I0731 10:14:15.591049    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:14:15.598008    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 10:14:15.604568    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:14:15.611913    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 10:14:15.619320    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 10:14:15.626313    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 10:14:15.632839    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:14:15.640019    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 10:14:15.647475    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/1552.pem --> /usr/share/ca-certificates/1552.pem (1338 bytes)
	I0731 10:14:15.654275    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/ssl/certs/15522.pem --> /usr/share/ca-certificates/15522.pem (1708 bytes)
	I0731 10:14:15.660757    3892 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:14:15.668127    3892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:14:15.674677    3892 ssh_runner.go:195] Run: openssl version
	I0731 10:14:15.676539    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1552.pem && ln -fs /usr/share/ca-certificates/1552.pem /etc/ssl/certs/1552.pem"
	I0731 10:14:15.679862    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.681398    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 16:45 /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.681419    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1552.pem
	I0731 10:14:15.683319    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1552.pem /etc/ssl/certs/51391683.0"
	I0731 10:14:15.686139    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15522.pem && ln -fs /usr/share/ca-certificates/15522.pem /etc/ssl/certs/15522.pem"
	I0731 10:14:15.689120    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.690658    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 16:45 /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.690678    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15522.pem
	I0731 10:14:15.692416    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15522.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 10:14:15.695869    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:14:15.699162    3892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.700610    3892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 16:39 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.700630    3892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:14:15.702397    3892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 10:14:15.705205    3892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 10:14:15.706707    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 10:14:15.708588    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 10:14:15.710551    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 10:14:15.712446    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 10:14:15.714393    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 10:14:15.716185    3892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
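The openssl runs above do two jobs: install each CA into OpenSSL's hashed directory (a symlink named <subject-hash>.0 under /etc/ssl/certs) and confirm the cluster certs are good for at least another day. A sketch of both, with paths taken from the log:

```bash
# Link minikubeCA.pem under its subject-hash name so OpenSSL's default
# verifier can find it, then fail fast if a cert expires within 86400s (24h).
PEM=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints e.g. b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
  && echo "still valid for at least 24h"
```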
	I0731 10:14:15.718132    3892 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50348 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 10:14:15.718200    3892 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:14:15.728678    3892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:14:15.732149    3892 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 10:14:15.732159    3892 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 10:14:15.732188    3892 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 10:14:15.734988    3892 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 10:14:15.735303    3892 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-608000" does not appear in /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:14:15.735400    3892 kubeconfig.go:62] /Users/jenkins/minikube-integration/19349-1049/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-608000" cluster setting kubeconfig missing "stopped-upgrade-608000" context setting]
	I0731 10:14:15.735595    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:14:15.736310    3892 kapi.go:59] client config for stopped-upgrade-608000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105cb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:14:15.736655    3892 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 10:14:15.739504    3892 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-608000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
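The drift above (the CRI socket gaining the unix:// scheme, cgroupDriver flipping from systemd to cgroupfs) is detected with nothing fancier than diff. A sketch of the check-and-replace, matching the Run: lines before and after the diff output:

```bash
# A non-empty diff means the on-disk kubeadm config is stale; promote the
# .new file and reconfigure the cluster from it.
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi
```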
	I0731 10:14:15.739509    3892 kubeadm.go:1160] stopping kube-system containers ...
	I0731 10:14:15.739547    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 10:14:15.754597    3892 docker.go:483] Stopping containers: [a2950ef315b5 7b8c410172e6 5892fd1585da a8cae9097f6b d04cca0c827c 150271b28573 c224a139058d 7dddf93acbff]
	I0731 10:14:15.754659    3892 ssh_runner.go:195] Run: docker stop a2950ef315b5 7b8c410172e6 5892fd1585da a8cae9097f6b d04cca0c827c 150271b28573 c224a139058d 7dddf93acbff
	I0731 10:14:15.765447    3892 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 10:14:15.771186    3892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:14:15.773980    3892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:14:15.773988    3892 kubeadm.go:157] found existing configuration files:
	
	I0731 10:14:15.774014    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf
	I0731 10:14:15.776608    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:14:15.776631    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:14:15.779627    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf
	I0731 10:14:15.782175    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:14:15.782194    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:14:15.784757    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf
	I0731 10:14:15.787577    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:14:15.787597    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:14:15.790131    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf
	I0731 10:14:15.792625    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:14:15.792645    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 10:14:15.795654    3892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:14:15.798335    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:15.821289    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.388061    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.497697    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 10:14:16.519825    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
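Rather than a full `kubeadm init`, the restart replays individual init phases against the repaired config, in the order logged above. The same sequence as a loop (phase list copied from the log; PATH pins the version-matched binaries):

```bash
# Re-run only the init phases a restart needs: certs, kubeconfigs,
# kubelet bootstrap, static control-plane manifests, and local etcd.
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
done
```

$phase is deliberately left unquoted so "certs all" splits into the two arguments kubeadm expects.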
	I0731 10:14:16.547263    3892 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:14:16.547348    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:17.049483    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:17.549439    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:14:17.554219    3892 api_server.go:72] duration metric: took 1.006977125s to wait for apiserver process to appear ...
	I0731 10:14:17.554227    3892 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:14:17.554236    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:22.556336    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:22.556382    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:27.557290    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:27.557335    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:32.558862    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:32.558905    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:37.560242    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:37.560312    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:42.561931    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:42.561973    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:47.563413    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:47.563496    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:52.565268    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:52.565306    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:14:57.567334    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:14:57.567408    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:02.570022    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:02.570101    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:07.572585    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:07.572631    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:12.574774    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:12.574848    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:17.577348    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
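Every healthz probe above times out after roughly five seconds, which is the defining failure of this run: the static pods never come up, so the poller falls through to log gathering below. A manual equivalent of the probe (-k because the apiserver cert is not in the local trust store):

```bash
# Hit the apiserver health endpoint with the same ~5s budget the poller uses.
curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not responding"
```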
	I0731 10:15:17.577568    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:17.606953    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:17.607072    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:17.623872    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:17.623959    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:17.636886    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:17.636963    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:17.647824    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:17.647902    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:17.658482    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:17.658562    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:17.669098    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:17.669169    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:17.679232    3892 logs.go:276] 0 containers: []
	W0731 10:15:17.679242    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:17.679295    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:17.689328    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:17.689347    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:17.689355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:17.700151    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:17.700161    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:17.714279    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:17.714293    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:17.725461    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:17.725473    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:17.740551    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:17.740563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:17.758333    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:17.758344    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
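The container-status Run: line above embeds a fallback worth unpacking: prefer crictl when installed, otherwise let the bare `crictl` placeholder fail so the `||` falls through to plain docker. The same logic with modern substitution syntax:

```bash
# If `which` finds crictl, run it by full path; otherwise the literal word
# "crictl" fails to execute and the docker branch runs instead.
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
```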
	I0731 10:15:17.771058    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:17.771068    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:17.775665    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:17.775673    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:17.854774    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:17.854788    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:17.871545    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:17.871557    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:17.882804    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:17.882814    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:17.921821    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:17.921831    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:17.935033    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:17.935044    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:17.959254    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:17.959267    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:18.001274    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:18.001285    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:18.021779    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:18.021790    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:18.036739    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:18.036752    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:20.550525    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:25.552904    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:25.553137    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:25.573763    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:25.573869    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:25.589605    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:25.589696    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:25.602065    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:25.602130    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:25.612394    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:25.612466    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:25.628606    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:25.628683    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:25.638963    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:25.639029    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:25.649272    3892 logs.go:276] 0 containers: []
	W0731 10:15:25.649283    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:25.649345    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:25.659715    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:25.659733    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:25.659738    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:25.671189    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:25.671201    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:25.675679    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:25.675687    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:25.690498    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:25.690509    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:25.702828    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:25.702842    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:25.728572    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:25.728581    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:25.742552    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:25.742563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:25.780568    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:25.780578    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:25.791820    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:25.791833    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:25.802809    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:25.802820    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:25.814446    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:25.814457    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:25.852431    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:25.852441    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:25.867301    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:25.867311    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:25.879333    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:25.879342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:25.904259    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:25.904269    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:25.917109    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:25.917117    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:25.955598    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:25.955611    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:28.471994    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:33.474293    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
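
The repeating "Checking apiserver healthz ... stopped" pairs above are a client-side probe timing out: each GET to /healthz is aborted after about five seconds, which is exactly the error Go's http.Client produces when its Timeout fires ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)"). A minimal sketch of such a probe — assuming a hypothetical name probeHealthz, not minikube's actual api_server.go code — looks like this:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against an apiserver /healthz endpoint
// with a hard client timeout. Hypothetical sketch inferred from the
// log above, not minikube's implementation.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded" when the apiserver never answers
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
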
	I0731 10:15:33.474497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:33.494168    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:33.494258    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:33.507484    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:33.507556    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:33.519289    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:33.519357    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:33.530382    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:33.530457    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:33.540724    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:33.540788    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:33.552186    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:33.552259    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:33.562374    3892 logs.go:276] 0 containers: []
	W0731 10:15:33.562384    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:33.562445    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:33.575656    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
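
After each failed probe, the runner enumerates the per-component containers with a name filter before tailing their logs; that is what the blocks of "docker ps -a --filter=name=k8s_... --format={{.ID}}" lines above are doing. A rough stand-in for that enumeration step, shelling out the same command (listContainers is an assumed name; minikube actually runs this over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers, running or exited,
// whose name matches k8s_<component> — the filter used in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// --format {{.ID}} prints one ID per line; Fields splits and drops blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // e.g. kindnet: 0 containers: []
	}
}
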
	I0731 10:15:33.575675    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:33.575693    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:33.579759    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:33.579766    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:33.616269    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:33.616278    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:33.627403    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:33.627420    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:33.638566    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:33.638577    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:33.675519    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:33.675526    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:33.711286    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:33.711299    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:33.725765    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:33.725776    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:33.737867    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:33.737878    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:33.754928    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:33.754939    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:33.766893    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:33.766905    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:33.784023    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:33.784032    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:33.802920    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:33.802930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:33.817289    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:33.817298    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:33.832786    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:33.832797    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:33.845504    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:33.845514    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:33.857080    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:33.857090    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:36.384585    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:41.386929    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:41.387078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:41.401553    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:41.401637    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:41.413373    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:41.413440    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:41.424155    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:41.424220    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:41.439269    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:41.439337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:41.449261    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:41.449322    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:41.459676    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:41.459745    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:41.469274    3892 logs.go:276] 0 containers: []
	W0731 10:15:41.469285    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:41.469337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:41.479987    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:41.480004    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:41.480010    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:41.493075    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:41.493085    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:41.504331    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:41.504342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:41.518727    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:41.518739    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:41.555632    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:41.555645    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:41.593121    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:41.593135    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:41.606813    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:41.606827    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:41.618573    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:41.618584    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:41.643281    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:41.643289    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:41.655253    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:41.655264    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:41.667220    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:41.667232    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:41.705462    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:41.705470    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:41.716788    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:41.716801    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:41.731641    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:41.731651    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:41.748837    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:41.748850    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:41.760758    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:41.760771    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:41.765567    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:41.765576    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
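
Every container ID found is then passed to a /bin/bash -c "docker logs --tail 400 <id>" invocation, capping each component's output at its last 400 lines. A minimal equivalent of that gathering step (tailContainerLogs is a hypothetical name, not the ssh_runner.go API):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the repeated
// `/bin/bash -c "docker logs --tail 400 <id>"` calls in the log above.
func tailContainerLogs(id string) (string, error) {
	cmd := exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id)
	out, err := cmd.CombinedOutput() // docker logs replays the container's stdout and stderr
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("6ed76ce441d6") // one of the kube-apiserver IDs above
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Print(logs)
}
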
	I0731 10:15:44.281275    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:49.283549    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:49.283741    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:49.301217    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:49.301307    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:49.314982    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:49.315057    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:49.326459    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:49.326528    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:49.337268    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:49.337348    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:49.350480    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:49.350553    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:49.364429    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:49.364501    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:49.378428    3892 logs.go:276] 0 containers: []
	W0731 10:15:49.378440    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:49.378497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:49.389296    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:49.389313    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:49.389318    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:49.403203    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:49.403214    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:49.414677    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:49.414688    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:49.452641    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:49.452654    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:49.470843    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:49.470855    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:49.485251    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:49.485262    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:49.497063    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:49.497074    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:49.508336    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:49.508346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:15:49.535581    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:49.535590    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:49.539581    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:49.539588    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:49.553819    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:49.553829    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:49.566026    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:49.566037    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:49.583690    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:49.583699    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:49.595661    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:49.595671    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:49.634836    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:49.634845    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:49.674149    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:49.674159    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:49.685140    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:49.685150    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:52.199642    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:15:57.201941    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:15:57.202204    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:15:57.240391    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:15:57.240534    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:15:57.262320    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:15:57.262423    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:15:57.283862    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:15:57.283939    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:15:57.296648    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:15:57.296718    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:15:57.307436    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:15:57.307511    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:15:57.319443    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:15:57.319519    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:15:57.329921    3892 logs.go:276] 0 containers: []
	W0731 10:15:57.329933    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:15:57.329992    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:15:57.340437    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:15:57.340453    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:15:57.340458    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:15:57.356316    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:15:57.356329    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:15:57.394298    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:15:57.394309    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:15:57.433737    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:15:57.433751    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:15:57.437959    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:15:57.437966    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:15:57.477812    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:15:57.477824    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:15:57.495669    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:15:57.495682    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:15:57.506764    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:15:57.506777    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:15:57.518223    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:15:57.518233    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:15:57.545261    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:15:57.545274    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:15:57.559622    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:15:57.559632    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:15:57.571739    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:15:57.571749    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:15:57.586864    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:15:57.586874    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:15:57.604108    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:15:57.604121    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:15:57.619747    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:15:57.619757    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:15:57.631507    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:15:57.631521    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:15:57.642942    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:15:57.642953    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:00.170351    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:05.172597    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:05.172802    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:05.194471    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:05.194605    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:05.210277    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:05.210365    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:05.222429    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:05.222506    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:05.233641    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:05.233711    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:05.244134    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:05.244198    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:05.255002    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:05.255078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:05.265143    3892 logs.go:276] 0 containers: []
	W0731 10:16:05.265156    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:05.265215    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:05.275720    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:05.275738    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:05.275743    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:05.290624    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:05.290638    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:05.303676    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:05.303686    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:05.327508    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:05.327520    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:05.345458    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:05.345468    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:05.385873    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:05.385887    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:05.424256    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:05.424268    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:05.438463    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:05.438473    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:05.452167    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:05.452177    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:05.467390    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:05.467401    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:05.485389    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:05.485400    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:05.499799    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:05.499808    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:05.514256    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:05.514267    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:05.531656    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:05.531667    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:05.543137    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:05.543148    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:05.547736    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:05.547744    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:05.581348    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:05.581358    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:08.098041    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:13.100686    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:13.101130    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:13.140595    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:13.140739    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:13.162036    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:13.162140    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:13.177472    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:13.177549    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:13.189779    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:13.189855    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:13.200657    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:13.200750    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:13.211540    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:13.211615    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:13.222340    3892 logs.go:276] 0 containers: []
	W0731 10:16:13.222351    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:13.222412    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:13.233276    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:13.233295    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:13.233301    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:13.251494    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:13.251504    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:13.265418    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:13.265429    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:13.277549    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:13.277563    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:13.289277    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:13.289289    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:13.304495    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:13.304506    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:13.341893    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:13.341904    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:13.346003    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:13.346009    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:13.380501    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:13.380515    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:13.418017    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:13.418031    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:13.438630    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:13.438641    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:13.450307    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:13.450320    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:13.466340    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:13.466352    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:13.483771    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:13.483782    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:13.496480    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:13.496493    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:13.507926    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:13.507936    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:13.531307    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:13.531314    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:16.044673    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:21.047285    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:21.047732    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:21.089357    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:21.089497    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:21.111029    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:21.111146    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:21.126649    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:21.126724    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:21.139630    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:21.139706    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:21.151032    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:21.151103    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:21.163490    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:21.163585    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:21.174764    3892 logs.go:276] 0 containers: []
	W0731 10:16:21.174777    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:21.174840    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:21.189055    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:21.189074    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:21.189081    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:21.193314    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:21.193320    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:21.205431    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:21.205441    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:21.217235    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:21.217248    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:21.234936    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:21.234946    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:21.248749    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:21.248760    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:21.285985    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:21.285994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:21.300131    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:21.300141    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:21.340281    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:21.340291    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:21.354105    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:21.354115    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:21.365657    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:21.365668    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:21.380337    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:21.380347    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:21.391202    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:21.391213    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:21.403723    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:21.403733    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:21.445575    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:21.445586    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:21.460345    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:21.460355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:21.485598    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:21.485610    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:24.010628    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:29.013209    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:29.013495    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:29.046861    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:29.046990    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:29.066270    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:29.066363    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:29.080820    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:29.080900    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:29.096658    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:29.096732    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:29.106994    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:29.107061    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:29.117427    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:29.117500    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:29.127259    3892 logs.go:276] 0 containers: []
	W0731 10:16:29.127272    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:29.127329    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:29.137794    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:29.137813    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:29.137821    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:29.149788    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:29.149798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:29.161142    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:29.161153    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:29.185047    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:29.185057    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:29.198018    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:29.198032    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:29.211928    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:29.211937    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:29.223489    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:29.223500    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:29.240572    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:29.240583    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:29.254312    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:29.254324    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:29.269088    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:29.269101    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:29.280177    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:29.280188    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:29.294808    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:29.294819    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:29.308582    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:29.308595    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:29.347159    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:29.347168    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:29.351528    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:29.351537    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:29.386797    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:29.386811    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:29.425065    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:29.425082    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:31.940895    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:36.943185    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:36.943500    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:36.980876    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:36.980984    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:37.000513    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:37.000589    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:37.013391    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:37.013459    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:37.025700    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:37.025763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:37.036013    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:37.036075    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:37.046669    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:37.046735    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:37.056891    3892 logs.go:276] 0 containers: []
	W0731 10:16:37.056904    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:37.056970    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:37.067688    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:37.067706    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:37.067712    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:37.081366    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:37.081379    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:37.095493    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:37.095506    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:37.109408    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:37.109419    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:37.121164    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:37.121178    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:37.159892    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:37.159900    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:37.194251    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:37.194264    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:37.209282    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:37.209295    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:37.220692    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:37.220701    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:37.232040    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:37.232050    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:37.236528    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:37.236535    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:37.248760    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:37.248776    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:37.260923    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:37.260933    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:37.278909    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:37.278922    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:37.293574    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:37.293584    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:37.335040    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:37.335051    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:37.349986    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:37.349996    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:39.876648    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:44.878775    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:44.878929    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:44.893586    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:44.893665    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:44.905107    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:44.905176    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:44.915410    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:44.915473    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:44.926318    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:44.926388    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:44.940657    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:44.940719    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:44.951414    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:44.951474    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:44.961669    3892 logs.go:276] 0 containers: []
	W0731 10:16:44.961678    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:44.961726    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:44.972009    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:44.972026    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:44.972032    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:45.012441    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:45.012454    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:45.050014    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:45.050025    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:45.064217    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:45.064228    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:45.075967    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:45.075979    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:45.087548    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:45.087558    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:45.099635    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:45.099646    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:45.138044    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:45.138056    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:45.152349    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:45.152362    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:16:45.167326    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:45.167342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:45.185169    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:45.185181    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:45.197665    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:45.197677    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:45.208872    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:45.208882    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:45.212860    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:45.212870    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:45.226693    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:45.226703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:45.238523    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:45.238533    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:45.250618    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:45.250628    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
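
Taken together, each block above is one iteration of the same wait loop: probe /healthz, and on timeout run a full diagnostics pass (docker ps, docker logs, journalctl for kubelet/docker, dmesg, kubectl describe nodes) before the next attempt, roughly every eight seconds until the overall deadline expires. A compressed sketch of that control flow under assumed names — waitForAPIServer is hypothetical, and the probe/gather callbacks stand in for probeHealthz and the gathering steps shown earlier:

package main

import (
	"fmt"
	"time"
)

// waitForAPIServer captures the probe/gather/retry rhythm of the log:
// every failed /healthz probe triggers a diagnostics pass, then the
// probe is retried until the deadline passes. Sketch only.
func waitForAPIServer(probe func() error, gather func(), deadline time.Time) error {
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil // apiserver answered /healthz
		}
		gather()                    // docker ps, docker logs, journalctl, dmesg ...
		time.Sleep(3 * time.Second) // the log shows ~3 s between a timeout and the next probe
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	deadline := time.Now().Add(10 * time.Second) // short demo deadline; the real run retries for minutes
	err := waitForAPIServer(
		func() error { return fmt.Errorf("context deadline exceeded") }, // stub probe that always fails
		func() { fmt.Println("gathering logs ...") },                    // stub diagnostics pass
		deadline,
	)
	fmt.Println(err)
}
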
	I0731 10:16:47.777364    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:16:52.779479    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:16:52.779657    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:16:52.803613    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:16:52.803698    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:16:52.815819    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:16:52.815888    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:16:52.827213    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:16:52.827284    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:16:52.837821    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:16:52.837895    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:16:52.848107    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:16:52.848169    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:16:52.858781    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:16:52.858854    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:16:52.868922    3892 logs.go:276] 0 containers: []
	W0731 10:16:52.868932    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:16:52.868987    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:16:52.879642    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:16:52.879661    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:16:52.879667    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:16:52.891752    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:16:52.891763    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:16:52.903204    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:16:52.903219    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:16:52.940164    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:16:52.940172    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:16:52.953724    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:16:52.953735    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:16:52.968466    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:16:52.968477    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:16:52.979703    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:16:52.979713    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:16:53.002491    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:16:53.002499    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:16:53.014950    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:16:53.014961    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:16:53.032383    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:16:53.032393    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:16:53.046512    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:16:53.046522    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:16:53.059612    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:16:53.059623    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:16:53.096083    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:16:53.096094    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:16:53.109359    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:16:53.109371    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:16:53.126784    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:16:53.126794    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:16:53.131254    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:16:53.131260    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:16:53.169486    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:16:53.169496    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
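The recurring "Checking apiserver healthz" / "stopped:" pairs are a health probe: a GET against https://10.0.2.15:8443/healthz with a 5-second client timeout, which in this run expires every time ("Client.Timeout exceeded while awaiting headers"). Below is a minimal sketch of that kind of probe; the endpoint and timeout come from the log lines, while the function name, structure, and TLS handling are illustrative assumptions, not minikube's actual api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint
// with a hard client timeout, matching the 5s gap between each
// "Checking apiserver healthz" line and its "stopped:" line above.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// Assumption: the in-VM apiserver certificate is self-signed,
		// so this bare probe skips verification (illustrative only).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this reads like the log above:
		// context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}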
	I0731 10:16:55.686159    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:00.688458    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:00.688662    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:00.708680    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:00.708772    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:00.723646    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:00.723724    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:00.736018    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:00.736090    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:00.746706    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:00.746763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:00.757208    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:00.757274    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:00.767878    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:00.767943    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:00.781823    3892 logs.go:276] 0 containers: []
	W0731 10:17:00.781837    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:00.781891    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:00.792712    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:00.792742    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:00.792748    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:00.828155    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:00.828166    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:00.870557    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:00.870567    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:00.884261    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:00.884274    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:00.898642    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:00.898655    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:00.922371    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:00.922382    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:00.936038    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:00.936050    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:00.949299    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:00.949313    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:00.964137    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:00.964150    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:00.976527    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:00.976541    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:00.988468    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:00.988479    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:01.003506    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:01.003518    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:01.015688    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:01.015698    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:01.052319    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:01.052328    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:01.056138    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:01.056146    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:01.067118    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:01.067128    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:01.078385    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:01.078396    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:03.605215    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:08.607441    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:08.607720    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:08.636895    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:08.637011    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:08.654198    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:08.654282    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:08.668169    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:08.668368    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:08.680504    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:08.680579    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:08.691000    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:08.691066    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:08.701768    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:08.701837    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:08.711818    3892 logs.go:276] 0 containers: []
	W0731 10:17:08.711828    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:08.711881    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:08.722165    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:08.722183    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:08.722188    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:08.738654    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:08.738664    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:08.752430    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:08.752444    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:08.766378    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:08.766387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:08.778517    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:08.778528    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:08.793949    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:08.793965    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:08.810979    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:08.810994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:08.824264    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:08.824274    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:08.863431    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:08.863443    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:08.901558    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:08.901571    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:08.912406    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:08.912421    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:08.949342    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:08.949355    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:08.964106    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:08.964119    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:08.975752    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:08.975762    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:08.999821    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:08.999829    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:09.011298    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:09.011309    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:09.015695    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:09.015703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:11.531605    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:16.534290    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:16.534481    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:16.553448    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:16.553544    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:16.568494    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:16.568572    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:16.580931    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:16.581006    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:16.591536    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:16.591610    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:16.602077    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:16.602144    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:16.612306    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:16.612376    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:16.622663    3892 logs.go:276] 0 containers: []
	W0731 10:17:16.622674    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:16.622731    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:16.633215    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:16.633231    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:16.633238    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:16.653955    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:16.653969    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:16.670314    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:16.670326    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:16.674740    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:16.674750    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:16.686790    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:16.686804    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:16.701343    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:16.701352    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:16.714611    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:16.714620    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:16.732506    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:16.732516    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:16.747652    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:16.747662    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:16.786414    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:16.786421    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:16.820935    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:16.820952    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:16.865538    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:16.865548    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:16.879547    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:16.879562    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:16.903407    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:16.903416    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:16.915028    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:16.915037    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:16.929607    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:16.929617    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:16.947666    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:16.947676    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:19.467576    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:24.467810    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:24.468060    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:24.497062    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:24.497164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:24.512877    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:24.512959    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:24.526165    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:24.526238    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:24.537255    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:24.537317    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:24.547793    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:24.547863    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:24.558235    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:24.558294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:24.568881    3892 logs.go:276] 0 containers: []
	W0731 10:17:24.568893    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:24.568952    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:24.579605    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:24.579623    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:24.579629    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:24.583615    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:24.583622    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:24.620568    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:24.620578    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:24.632538    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:24.632549    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:24.649975    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:24.649985    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:24.686836    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:24.686843    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:24.722096    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:24.722108    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:24.737288    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:24.737297    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:24.752496    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:24.752507    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:24.763785    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:24.763795    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:24.775662    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:24.775673    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:24.788750    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:24.788762    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:24.800003    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:24.800012    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:24.810826    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:24.810838    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:24.833729    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:24.833738    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:24.847597    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:24.847607    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:24.861352    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:24.861362    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:27.378608    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:32.381339    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:32.381540    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:32.397030    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:32.397120    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:32.408943    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:32.409019    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:32.419554    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:32.419614    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:32.430317    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:32.430395    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:32.440780    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:32.440848    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:32.451210    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:32.451274    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:32.462868    3892 logs.go:276] 0 containers: []
	W0731 10:17:32.462883    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:32.462934    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:32.474898    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:32.474917    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:32.474923    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:32.486695    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:32.486706    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:32.499834    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:32.499849    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:32.513645    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:32.513658    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:32.528622    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:32.528632    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:32.539697    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:32.539709    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:32.559400    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:32.559413    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:32.597648    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:32.597657    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:32.602322    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:32.602329    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:32.637449    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:32.637461    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:32.649875    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:32.649889    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:32.665334    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:32.665347    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:32.683247    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:32.683259    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:32.706669    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:32.706676    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:32.717631    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:32.717642    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:32.757303    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:32.757314    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:32.769476    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:32.769487    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:35.282955    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:40.285115    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:40.285268    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:40.299709    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:40.299787    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:40.312077    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:40.312142    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:40.323329    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:40.323399    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:40.334254    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:40.334379    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:40.345320    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:40.345393    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:40.356372    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:40.356442    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:40.366827    3892 logs.go:276] 0 containers: []
	W0731 10:17:40.366842    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:40.366903    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:40.377851    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:40.377867    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:40.377875    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:40.382514    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:40.382520    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:40.420688    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:40.420703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:40.439749    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:40.439765    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:40.463396    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:40.463407    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:40.476629    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:40.476644    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:40.491355    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:40.491366    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:40.507673    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:40.507686    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:40.520564    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:40.520575    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:40.560910    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:40.560930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:40.574085    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:40.574097    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:40.590618    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:40.590635    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:40.631519    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:40.631534    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:40.647300    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:40.647315    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:40.660715    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:40.660728    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:40.679227    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:40.679238    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:40.692820    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:40.692831    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:43.206310    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:48.208648    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:48.208870    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:48.233220    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:48.233343    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:48.249236    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:48.249326    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:48.262195    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:48.262267    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:48.273322    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:48.273389    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:48.283545    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:48.283612    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:48.294439    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:48.294510    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:48.304498    3892 logs.go:276] 0 containers: []
	W0731 10:17:48.304508    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:48.304563    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:48.315435    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:48.315454    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:48.315460    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:48.350329    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:48.350343    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:48.362333    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:48.362346    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:48.398693    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:48.398710    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:48.410528    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:48.410539    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:48.422408    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:48.422421    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:48.440295    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:48.440306    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:48.453616    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:48.453628    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:48.468266    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:48.468280    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:48.481904    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:48.481916    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:48.496464    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:48.496474    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:48.508729    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:48.508740    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:48.521277    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:48.521290    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:48.525804    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:48.525812    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:48.540080    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:48.540093    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:48.578861    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:48.578871    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:48.598265    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:48.598276    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:51.122350    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:17:56.124850    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:17:56.124983    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:17:56.138676    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:17:56.138760    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:17:56.149869    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:17:56.149948    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:17:56.160363    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:17:56.160431    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:17:56.172007    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:17:56.172078    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:17:56.183564    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:17:56.183634    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:17:56.194777    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:17:56.194845    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:17:56.205121    3892 logs.go:276] 0 containers: []
	W0731 10:17:56.205137    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:17:56.205196    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:17:56.216161    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:17:56.216180    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:17:56.216188    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:17:56.255143    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:17:56.255155    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:17:56.273122    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:17:56.273133    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:17:56.312926    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:17:56.312937    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:17:56.317673    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:17:56.317680    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:17:56.356678    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:17:56.356691    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:17:56.373226    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:17:56.373239    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:17:56.385673    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:17:56.385685    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:17:56.398996    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:17:56.399007    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:17:56.410034    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:17:56.410045    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:17:56.425201    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:17:56.425218    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:17:56.438545    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:17:56.438556    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:17:56.451474    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:17:56.451485    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:17:56.467779    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:17:56.467792    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:17:56.485495    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:17:56.485508    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:17:56.498041    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:17:56.498054    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:17:56.510246    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:17:56.510257    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:17:59.035583    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:04.037985    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:04.038324    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:04.073718    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:18:04.073846    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:04.093069    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:18:04.093143    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:04.106756    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:18:04.106832    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:04.122816    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:18:04.122890    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:04.134530    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:18:04.134590    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:04.145713    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:18:04.145783    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:04.156697    3892 logs.go:276] 0 containers: []
	W0731 10:18:04.156708    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:04.156770    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:04.167640    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:18:04.167658    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:18:04.167663    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:18:04.186233    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:18:04.186245    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:18:04.225123    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:18:04.225134    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:18:04.237208    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:04.237219    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:04.241795    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:18:04.241804    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:18:04.256050    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:04.256061    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:18:04.296310    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:04.296330    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:04.352061    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:18:04.352074    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:18:04.367233    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:18:04.367245    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:18:04.379215    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:18:04.379227    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
	I0731 10:18:04.396636    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:18:04.396646    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:18:04.415806    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:18:04.415815    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:18:04.429015    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:18:04.429025    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:18:04.442760    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:18:04.442772    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:18:04.455130    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:18:04.455139    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:18:04.466726    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:04.466737    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:04.490377    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:18:04.490390    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:07.004451    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:12.006685    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:12.006918    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:18:12.028293    3892 logs.go:276] 2 containers: [6ed76ce441d6 7b8c410172e6]
	I0731 10:18:12.028401    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:18:12.043350    3892 logs.go:276] 2 containers: [a90451008061 a2950ef315b5]
	I0731 10:18:12.043420    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:18:12.055366    3892 logs.go:276] 1 containers: [4ab2144f7870]
	I0731 10:18:12.055443    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:18:12.066575    3892 logs.go:276] 2 containers: [221a4e6295fd d04cca0c827c]
	I0731 10:18:12.066648    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:18:12.076746    3892 logs.go:276] 1 containers: [fbe53d9ecd77]
	I0731 10:18:12.076813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:18:12.087260    3892 logs.go:276] 2 containers: [bb0c215eaad8 a8cae9097f6b]
	I0731 10:18:12.087330    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:18:12.102556    3892 logs.go:276] 0 containers: []
	W0731 10:18:12.102571    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:18:12.102634    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:18:12.112942    3892 logs.go:276] 2 containers: [697b72d26647 84a98b95ce7c]
	I0731 10:18:12.112960    3892 logs.go:123] Gathering logs for kube-scheduler [d04cca0c827c] ...
	I0731 10:18:12.112965    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04cca0c827c"
	I0731 10:18:12.128356    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:18:12.128367    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:18:12.141260    3892 logs.go:123] Gathering logs for kube-apiserver [6ed76ce441d6] ...
	I0731 10:18:12.141276    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed76ce441d6"
	I0731 10:18:12.155631    3892 logs.go:123] Gathering logs for kube-apiserver [7b8c410172e6] ...
	I0731 10:18:12.155640    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b8c410172e6"
	I0731 10:18:12.194452    3892 logs.go:123] Gathering logs for etcd [a90451008061] ...
	I0731 10:18:12.194469    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a90451008061"
	I0731 10:18:12.212597    3892 logs.go:123] Gathering logs for storage-provisioner [697b72d26647] ...
	I0731 10:18:12.212619    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697b72d26647"
	I0731 10:18:12.224821    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:18:12.224832    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:18:12.247029    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:18:12.247036    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:18:12.251189    3892 logs.go:123] Gathering logs for kube-proxy [fbe53d9ecd77] ...
	I0731 10:18:12.251197    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe53d9ecd77"
	I0731 10:18:12.262876    3892 logs.go:123] Gathering logs for kube-controller-manager [a8cae9097f6b] ...
	I0731 10:18:12.262885    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8cae9097f6b"
	I0731 10:18:12.291932    3892 logs.go:123] Gathering logs for storage-provisioner [84a98b95ce7c] ...
	I0731 10:18:12.291945    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84a98b95ce7c"
	I0731 10:18:12.311692    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:18:12.311704    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:18:12.350239    3892 logs.go:123] Gathering logs for etcd [a2950ef315b5] ...
	I0731 10:18:12.350253    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2950ef315b5"
	I0731 10:18:12.364722    3892 logs.go:123] Gathering logs for coredns [4ab2144f7870] ...
	I0731 10:18:12.364734    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ab2144f7870"
	I0731 10:18:12.376750    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:18:12.376762    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:18:12.414702    3892 logs.go:123] Gathering logs for kube-scheduler [221a4e6295fd] ...
	I0731 10:18:12.414714    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 221a4e6295fd"
	I0731 10:18:12.432989    3892 logs.go:123] Gathering logs for kube-controller-manager [bb0c215eaad8] ...
	I0731 10:18:12.433002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0c215eaad8"
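The "container status" step in the cycle above degrades through runtimes: `which crictl || echo crictl` substitutes the bare name when crictl is absent so the first command still fails cleanly, and `|| sudo docker ps -a` then covers Docker-only guests like this one. The same fallback as a stand-alone command (nothing here beyond what the log already runs):

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a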
	I0731 10:18:14.953716    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:19.955984    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:19.956079    3892 kubeadm.go:597] duration metric: took 4m4.223428375s to restartPrimaryControlPlane
	W0731 10:18:19.956143    3892 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 10:18:19.956171    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 10:18:20.957404    3892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.001234833s)
	I0731 10:18:20.957474    3892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:18:20.962432    3892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:18:20.965174    3892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:18:20.967963    3892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:18:20.967968    3892 kubeadm.go:157] found existing configuration files:
	
	I0731 10:18:20.967991    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf
	I0731 10:18:20.970637    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 10:18:20.970660    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 10:18:20.973080    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf
	I0731 10:18:20.976068    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 10:18:20.976093    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 10:18:20.979182    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf
	I0731 10:18:20.981791    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 10:18:20.981811    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 10:18:20.984658    3892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf
	I0731 10:18:20.987564    3892 kubeadm.go:163] "https://control-plane.minikube.internal:50348" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50348 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 10:18:20.987587    3892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
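The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed so that the `kubeadm init` run below regenerates it. A hedged sketch of the same check as a loop (endpoint and file paths are taken from the log; the loop form is illustrative, not minikube's actual code):

	endpoint="https://control-plane.minikube.internal:50348"
	for name in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${name}.conf"
	  # drop the file unless it already points at the expected endpoint
	  sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	done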
	I0731 10:18:20.990247    3892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 10:18:21.006982    3892 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 10:18:21.007179    3892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 10:18:21.067731    3892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:18:21.067780    3892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:18:21.067826    3892 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:18:21.118374    3892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:18:21.121602    3892 out.go:204]   - Generating certificates and keys ...
	I0731 10:18:21.121638    3892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 10:18:21.121670    3892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 10:18:21.121710    3892 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 10:18:21.121746    3892 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 10:18:21.121778    3892 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 10:18:21.121817    3892 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 10:18:21.121853    3892 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 10:18:21.121886    3892 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 10:18:21.121924    3892 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 10:18:21.121959    3892 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 10:18:21.121986    3892 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 10:18:21.122021    3892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:18:21.268828    3892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:18:21.309888    3892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:18:21.431260    3892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:18:21.679469    3892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:18:21.708480    3892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:18:21.709452    3892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:18:21.709476    3892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 10:18:21.794051    3892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:18:21.798055    3892 out.go:204]   - Booting up control plane ...
	I0731 10:18:21.798123    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:18:21.798162    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:18:21.798204    3892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:18:21.798269    3892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:18:21.798397    3892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:18:26.796374    3892 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001004 seconds
	I0731 10:18:26.796445    3892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:18:26.800388    3892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:18:27.310603    3892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:18:27.310880    3892 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-608000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:18:27.815382    3892 kubeadm.go:310] [bootstrap-token] Using token: k37bxj.v1f4hmhqqkyf239x
	I0731 10:18:27.821462    3892 out.go:204]   - Configuring RBAC rules ...
	I0731 10:18:27.821539    3892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:18:27.821597    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:18:27.828174    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:18:27.829050    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:18:27.830213    3892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:18:27.831071    3892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:18:27.834589    3892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:18:27.984802    3892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 10:18:28.220783    3892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 10:18:28.221319    3892 kubeadm.go:310] 
	I0731 10:18:28.221350    3892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 10:18:28.221353    3892 kubeadm.go:310] 
	I0731 10:18:28.221394    3892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 10:18:28.221401    3892 kubeadm.go:310] 
	I0731 10:18:28.221415    3892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 10:18:28.221441    3892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:18:28.221465    3892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:18:28.221469    3892 kubeadm.go:310] 
	I0731 10:18:28.221500    3892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 10:18:28.221503    3892 kubeadm.go:310] 
	I0731 10:18:28.221529    3892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:18:28.221537    3892 kubeadm.go:310] 
	I0731 10:18:28.221567    3892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 10:18:28.221599    3892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:18:28.221638    3892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:18:28.221641    3892 kubeadm.go:310] 
	I0731 10:18:28.221679    3892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:18:28.221720    3892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 10:18:28.221724    3892 kubeadm.go:310] 
	I0731 10:18:28.221764    3892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k37bxj.v1f4hmhqqkyf239x \
	I0731 10:18:28.221822    3892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 \
	I0731 10:18:28.221832    3892 kubeadm.go:310] 	--control-plane 
	I0731 10:18:28.221836    3892 kubeadm.go:310] 
	I0731 10:18:28.221898    3892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:18:28.221985    3892 kubeadm.go:310] 
	I0731 10:18:28.222066    3892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k37bxj.v1f4hmhqqkyf239x \
	I0731 10:18:28.222114    3892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63ada9bb4f142fcd6a68652e153527c4e1e22cbdd62c49f6941853b3ba723ca2 
	I0731 10:18:28.222163    3892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 10:18:28.222175    3892 cni.go:84] Creating CNI manager for ""
	I0731 10:18:28.222182    3892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:18:28.228499    3892 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 10:18:28.232564    3892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 10:18:28.235689    3892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 10:18:28.240422    3892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:18:28.240472    3892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:18:28.240476    3892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-608000 minikube.k8s.io/updated_at=2024_07_31T10_18_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1d737dad7efa60c56d30434fcd857dd3b14c91d9 minikube.k8s.io/name=stopped-upgrade-608000 minikube.k8s.io/primary=true
	I0731 10:18:28.278996    3892 kubeadm.go:1113] duration metric: took 38.552709ms to wait for elevateKubeSystemPrivileges
	I0731 10:18:28.279010    3892 ops.go:34] apiserver oom_adj: -16
	I0731 10:18:28.289607    3892 kubeadm.go:394] duration metric: took 4m12.571124791s to StartCluster
	I0731 10:18:28.289626    3892 settings.go:142] acquiring lock: {Name:mk1157c8925fbba0a07eb1816bde23d6b74fa3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:18:28.289718    3892 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:18:28.290123    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/kubeconfig: {Name:mk7aa7304788739007dd5984d3ae0414f5535a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:18:28.290810    3892 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:18:28.290824    3892 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 10:18:28.290856    3892 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-608000"
	I0731 10:18:28.290868    3892 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-608000"
	W0731 10:18:28.290871    3892 addons.go:243] addon storage-provisioner should already be in state true
	I0731 10:18:28.290882    3892 host.go:66] Checking if "stopped-upgrade-608000" exists ...
	I0731 10:18:28.290883    3892 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-608000"
	I0731 10:18:28.290893    3892 config.go:182] Loaded profile config "stopped-upgrade-608000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 10:18:28.290897    3892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-608000"
	I0731 10:18:28.291781    3892 kapi.go:59] client config for stopped-upgrade-608000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/stopped-upgrade-608000/client.key", CAFile:"/Users/jenkins/minikube-integration/19349-1049/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105cb81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 10:18:28.291896    3892 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-608000"
	W0731 10:18:28.291901    3892 addons.go:243] addon default-storageclass should already be in state true
	I0731 10:18:28.291907    3892 host.go:66] Checking if "stopped-upgrade-608000" exists ...
	I0731 10:18:28.293549    3892 out.go:177] * Verifying Kubernetes components...
	I0731 10:18:28.293856    3892 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:18:28.293943    3892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:18:28.293949    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:18:28.297554    3892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:18:28.301487    3892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:18:28.305477    3892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:18:28.305483    3892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:18:28.305489    3892 sshutil.go:53] new ssh client: &{IP:localhost Port:50281 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/stopped-upgrade-608000/id_rsa Username:docker}
	I0731 10:18:28.381263    3892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 10:18:28.386393    3892 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:18:28.386432    3892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:18:28.390284    3892 api_server.go:72] duration metric: took 99.463084ms to wait for apiserver process to appear ...
	I0731 10:18:28.390292    3892 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:18:28.390298    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:28.414390    3892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:18:28.429698    3892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:18:33.392348    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:33.392382    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:38.392628    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:38.392653    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:43.392894    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:43.392913    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:48.393225    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:48.393268    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:53.393908    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:53.393955    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:18:58.394607    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:18:58.394626    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 10:18:58.760336    3892 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 10:18:58.766479    3892 out.go:177] * Enabled addons: storage-provisioner
	I0731 10:18:58.773418    3892 addons.go:510] duration metric: took 30.483093667s for enable addons: enabled=[storage-provisioner]
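Each "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pair that follows is one probe of the apiserver with a 5-second client timeout. A single probe can be reproduced by hand roughly as below (illustrative, not part of the report; -k skips TLS verification for a quick check, or pass the profile's ca.crt and client cert/key from the client config above for a verified request):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz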
	I0731 10:19:03.395482    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:03.395534    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:08.396921    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:08.396947    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:13.397758    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:13.397774    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:18.399431    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:18.399476    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:23.399997    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:23.400035    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:28.402189    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:28.402310    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:28.415897    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:28.415971    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:28.426737    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:28.426811    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:28.443696    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:28.443759    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:28.454150    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:28.454218    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:28.464799    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:28.464873    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:28.476791    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:28.476860    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:28.487366    3892 logs.go:276] 0 containers: []
	W0731 10:19:28.487378    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:28.487433    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:28.498518    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:28.498537    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:28.498541    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:28.535673    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:28.535685    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:28.576814    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:28.576823    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:28.591975    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:28.591987    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:28.608154    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:28.608167    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:28.626631    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:28.626645    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:28.639678    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:28.639692    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:28.646806    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:28.646815    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:28.662439    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:28.662454    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:28.677885    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:28.677897    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:28.691012    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:28.691023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:28.704284    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:28.704295    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:28.728831    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:28.728849    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:31.243516    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:36.246017    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:36.246205    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:36.259681    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:36.259748    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:36.270621    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:36.270693    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:36.281687    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:36.281751    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:36.304645    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:36.304712    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:36.314963    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:36.315036    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:36.326321    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:36.326397    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:36.336142    3892 logs.go:276] 0 containers: []
	W0731 10:19:36.336153    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:36.336209    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:36.347099    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:36.347114    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:36.347121    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:36.358773    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:36.358782    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:36.381839    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:36.381847    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:36.399554    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:36.399565    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:36.411386    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:36.411401    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:36.422688    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:36.422701    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:36.440292    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:36.440303    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:36.459979    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:36.459994    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:36.487416    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:36.487426    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:36.498806    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:36.498819    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:36.537875    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:36.537885    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:36.542670    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:36.542677    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:36.580135    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:36.580151    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:39.096402    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:44.098657    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:44.098744    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:44.111008    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:44.111080    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:44.123411    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:44.123479    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:44.136817    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:44.136895    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:44.148704    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:44.148781    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:44.161264    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:44.161336    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:44.177888    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:44.177961    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:44.193220    3892 logs.go:276] 0 containers: []
	W0731 10:19:44.193232    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:44.193294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:44.205122    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:44.205136    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:44.205142    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:44.209838    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:44.209852    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:44.222707    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:44.222719    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:44.239205    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:44.239217    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:44.253828    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:44.253840    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:44.273722    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:44.273735    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:44.298530    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:44.298538    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:44.337088    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:44.337096    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:44.374265    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:44.374277    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:44.389066    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:44.389079    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:44.403012    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:44.403023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:44.415125    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:44.415136    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:44.426981    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:44.426991    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:46.941581    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:51.944187    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:51.944413    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:51.968717    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:51.968820    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:51.985260    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:51.985345    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:51.999781    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:51.999861    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:52.011094    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:52.011164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:52.022093    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:52.022161    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:52.033293    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:52.033363    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:52.044679    3892 logs.go:276] 0 containers: []
	W0731 10:19:52.044692    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:52.044751    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:52.055667    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:52.055682    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:52.055688    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:52.069536    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:19:52.069550    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:19:52.108073    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:52.108082    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:52.144077    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:52.144089    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:52.159595    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:52.159606    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:52.174655    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:19:52.174665    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:19:52.189122    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:19:52.189133    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:19:52.208956    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:52.208972    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:52.221444    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:19:52.221458    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:19:52.244713    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:19:52.244721    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:19:52.248781    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:52.248786    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:52.264207    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:52.264221    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:19:52.276170    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:19:52.276184    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:19:54.793572    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:19:59.795671    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:19:59.795769    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:19:59.808620    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:19:59.808685    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:19:59.820929    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:19:59.820999    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:19:59.832633    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:19:59.832700    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:19:59.843400    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:19:59.843471    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:19:59.854536    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:19:59.854604    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:19:59.865390    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:19:59.865459    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:19:59.875986    3892 logs.go:276] 0 containers: []
	W0731 10:19:59.875996    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:19:59.876048    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:19:59.887041    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:19:59.887061    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:19:59.887066    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:19:59.899186    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:19:59.899197    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:19:59.915227    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:19:59.915237    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:19:59.927505    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:19:59.927516    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:19:59.962903    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:19:59.962913    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:19:59.977536    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:19:59.977546    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:19:59.989617    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:19:59.989629    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:00.001999    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:00.002010    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:00.017377    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:00.017387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:00.030600    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:00.030610    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:00.067979    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:00.067992    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:00.072689    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:00.072697    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:00.090913    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:00.090924    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:02.617914    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:07.620105    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:07.620294    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:07.634839    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:07.634915    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:07.649003    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:07.649071    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:07.660776    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:07.660844    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:07.671782    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:07.671846    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:07.683369    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:07.683443    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:07.695212    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:07.695292    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:07.706963    3892 logs.go:276] 0 containers: []
	W0731 10:20:07.706975    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:07.707035    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:07.721557    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:07.721575    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:07.721582    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:07.758728    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:07.758736    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:07.794930    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:07.794944    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:07.814897    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:07.814911    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:07.827149    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:07.827159    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:07.842652    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:07.842662    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:07.855266    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:07.855276    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:07.873477    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:07.873487    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:07.886445    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:07.886456    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:07.910488    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:07.910497    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:07.915176    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:07.915182    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:07.929907    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:07.929919    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:07.947386    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:07.947397    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:10.461760    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:15.463925    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:15.464079    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:15.478937    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:15.479013    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:15.490439    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:15.490513    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:15.500832    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:15.500904    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:15.511111    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:15.511178    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:15.522128    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:15.522199    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:15.532754    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:15.532813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:15.543969    3892 logs.go:276] 0 containers: []
	W0731 10:20:15.543981    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:15.544049    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:15.554630    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:15.554644    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:15.554649    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:15.579631    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:15.579639    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:15.617946    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:15.617957    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:15.655282    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:15.655295    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:15.673862    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:15.673874    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:15.685713    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:15.685724    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:15.697566    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:15.697580    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:15.714537    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:15.714549    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:15.727310    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:15.727322    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:15.732007    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:15.732018    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:15.746676    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:15.746687    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:15.762787    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:15.762798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:15.774977    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:15.774988    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:18.291643    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:23.293843    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:23.294039    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:23.314250    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:23.314330    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:23.326834    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:23.326910    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:23.338344    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:23.338403    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:23.348431    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:23.348502    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:23.359213    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:23.359306    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:23.370069    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:23.370134    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:23.380109    3892 logs.go:276] 0 containers: []
	W0731 10:20:23.380119    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:23.380171    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:23.391553    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:23.391568    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:23.391573    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:23.395824    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:23.395829    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:23.409476    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:23.409490    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:23.420948    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:23.420962    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:23.435971    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:23.435981    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:23.460342    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:23.460350    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:23.472647    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:23.472661    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:23.509793    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:23.509800    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:23.523498    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:23.523512    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:23.535267    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:23.535280    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:23.547254    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:23.547268    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:23.564527    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:23.564541    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:23.576246    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:23.576259    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:26.114433    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:31.116578    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:31.116731    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:31.129668    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:31.129744    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:31.140514    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:31.140582    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:31.151300    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:31.151369    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:31.161915    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:31.161978    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:31.172785    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:31.172860    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:31.187069    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:31.187132    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:31.197201    3892 logs.go:276] 0 containers: []
	W0731 10:20:31.197215    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:31.197268    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:31.208118    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:31.208133    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:31.208139    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:31.246984    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:31.246996    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:31.261753    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:31.261765    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:31.273911    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:31.273926    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:31.285666    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:31.285675    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:31.297367    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:31.297380    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:31.314830    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:31.314841    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:31.352115    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:31.352122    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:31.356358    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:31.356364    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:31.370188    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:31.370198    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:31.384958    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:31.384971    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:31.396045    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:31.396055    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:31.419331    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:31.419341    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:33.934643    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:38.935043    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:38.935332    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:38.975064    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:38.975178    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:38.991030    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:38.991108    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:39.006854    3892 logs.go:276] 2 containers: [bfa7b7af9caa e524fc97276c]
	I0731 10:20:39.006926    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:39.017746    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:39.017813    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:39.027994    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:39.028062    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:39.038907    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:39.038976    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:39.049636    3892 logs.go:276] 0 containers: []
	W0731 10:20:39.049648    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:39.049705    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:39.060111    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:39.060126    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:39.060131    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:39.074785    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:39.074794    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:39.089329    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:39.089342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:39.101309    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:39.101322    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:39.125784    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:39.125797    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:39.137348    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:39.137360    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:39.149328    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:39.149340    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:39.188467    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:39.188477    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:39.192739    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:39.192748    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:39.228145    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:39.228155    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:39.239612    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:39.239624    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:39.254932    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:39.254945    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:39.267371    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:39.267384    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:41.785621    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:46.787927    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:46.788164    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:46.815339    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:46.815455    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:46.833332    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:46.833411    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:46.846886    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:20:46.846987    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:46.859539    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:46.859607    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:46.870488    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:46.870553    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:46.880925    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:46.880995    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:46.904338    3892 logs.go:276] 0 containers: []
	W0731 10:20:46.904352    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:46.904407    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:46.917070    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:46.917088    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:46.917093    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:46.932398    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:46.932408    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:46.944270    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:46.944283    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:46.961849    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:46.961859    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:46.973755    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:46.973766    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:46.978572    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:20:46.978579    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:20:46.990639    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:20:46.990649    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:20:47.002224    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:47.002236    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:47.038686    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:47.038696    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:47.051433    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:47.051444    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:47.075014    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:47.075023    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:47.086918    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:47.086930    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:47.099325    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:47.099336    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:47.137788    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:47.137798    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:47.151902    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:47.151912    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
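	(From the 10:20:46 iteration onward, the coredns filter returns four container IDs instead of two — the original pair plus two newer ones — which is why every subsequent cycle gathers four coredns log tails. The change is visible by comparing the same command's output across iterations in this log:

	    docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	    # up to 10:20:39 -> bfa7b7af9caa e524fc97276c
	    # from 10:20:46  -> 96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c
	)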
	I0731 10:20:49.667735    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:20:54.670090    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:20:54.670754    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:20:54.698561    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:20:54.698650    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:20:54.715839    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:20:54.715903    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:20:54.729613    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:20:54.729678    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:20:54.741587    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:20:54.741640    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:20:54.752056    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:20:54.752111    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:20:54.762722    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:20:54.762775    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:20:54.773263    3892 logs.go:276] 0 containers: []
	W0731 10:20:54.773272    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:20:54.773319    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:20:54.784072    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:20:54.784089    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:20:54.784095    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:20:54.823624    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:20:54.823634    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:20:54.827902    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:20:54.827911    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:20:54.842796    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:20:54.842806    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:20:54.878211    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:20:54.878224    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:20:54.890400    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:20:54.890409    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:20:54.902900    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:20:54.902911    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:20:54.928782    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:20:54.928792    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:20:54.947468    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:20:54.947481    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:20:54.959981    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:20:54.959991    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:20:54.977646    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:20:54.977657    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:20:54.990800    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:20:54.990813    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:20:55.005458    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:20:55.005469    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:20:55.017135    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:20:55.017145    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:20:55.029857    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:20:55.029868    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:20:57.544532    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:02.546699    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:02.546956    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:02.572123    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:02.572224    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:02.588333    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:02.588423    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:02.602469    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:02.602546    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:02.613572    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:02.613643    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:02.624418    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:02.624487    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:02.634959    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:02.635028    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:02.646320    3892 logs.go:276] 0 containers: []
	W0731 10:21:02.646330    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:02.646390    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:02.656778    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:02.656796    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:02.656802    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:02.699504    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:02.699519    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:02.711581    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:02.711592    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:02.715869    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:02.715877    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:02.730120    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:02.730130    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:02.742005    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:02.742015    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:02.769845    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:02.769857    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:02.781864    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:02.781874    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:02.807446    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:02.807453    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:02.845379    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:02.845387    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:02.859669    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:02.859682    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:02.871580    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:02.871593    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:02.886173    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:02.886185    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:02.897554    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:02.897564    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:02.909155    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:02.909167    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:05.423683    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:10.425932    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:10.426147    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:10.445411    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:10.445501    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:10.460759    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:10.460829    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:10.473200    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:10.473279    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:10.483997    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:10.484060    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:10.494572    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:10.494645    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:10.505239    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:10.505311    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:10.515362    3892 logs.go:276] 0 containers: []
	W0731 10:21:10.515375    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:10.515433    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:10.526028    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:10.526045    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:10.526051    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:10.530383    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:10.530389    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:10.544995    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:10.545008    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:10.557332    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:10.557342    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:10.582723    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:10.582731    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:10.593875    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:10.593884    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:10.631838    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:10.631850    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:10.668545    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:10.668562    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:10.683675    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:10.683688    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:10.698183    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:10.698194    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:10.710327    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:10.710341    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:10.722110    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:10.722121    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:10.734404    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:10.734416    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:10.749133    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:10.749144    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:10.760540    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:10.760551    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:13.279856    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:18.281956    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:18.282085    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:18.292542    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:18.292612    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:18.306096    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:18.306157    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:18.316822    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:18.316898    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:18.327245    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:18.327305    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:18.338228    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:18.338291    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:18.349138    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:18.349205    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:18.359561    3892 logs.go:276] 0 containers: []
	W0731 10:21:18.359574    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:18.359632    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:18.375171    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:18.375189    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:18.375202    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:18.380064    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:18.380073    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:18.391545    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:18.391560    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:18.416636    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:18.416644    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:18.455439    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:18.455449    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:18.470610    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:18.470622    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:18.483064    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:18.483075    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:18.496321    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:18.496336    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:18.509279    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:18.509290    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:18.545714    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:18.545725    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:18.559618    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:18.559628    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:18.571235    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:18.571244    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:18.590156    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:18.590165    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:18.602239    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:18.602249    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:18.619601    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:18.619616    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:21.133255    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:26.135504    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:26.135677    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:26.150486    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:26.150565    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:26.164258    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:26.164328    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:26.176147    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:26.176218    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:26.201631    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:26.201704    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:26.216807    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:26.216876    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:26.233215    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:26.233285    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:26.247535    3892 logs.go:276] 0 containers: []
	W0731 10:21:26.247551    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:26.247606    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:26.261963    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:26.261979    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:26.261985    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:26.299193    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:26.299207    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:26.320990    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:26.321002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:26.337316    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:26.337331    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:26.348726    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:26.348740    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:26.363409    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:26.363420    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:26.380291    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:26.380303    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:26.392127    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:26.392137    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:26.396408    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:26.396415    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:26.408007    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:26.408017    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:26.420098    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:26.420108    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:26.431655    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:26.431664    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:26.471198    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:26.471206    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:26.485398    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:26.485409    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:26.497496    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:26.497507    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:29.027380    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:34.029644    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:34.029889    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:34.048585    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:34.048670    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:34.062098    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:34.062177    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:34.076163    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:34.076234    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:34.088618    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:34.088684    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:34.099609    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:34.099669    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:34.111165    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:34.111235    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:34.121743    3892 logs.go:276] 0 containers: []
	W0731 10:21:34.121759    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:34.121810    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:34.132612    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:34.132629    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:34.132633    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:34.144753    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:34.144766    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:34.157259    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:34.157271    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:34.169630    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:34.169643    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:34.208085    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:34.208103    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:34.213004    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:34.213011    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:34.251626    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:34.251637    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:34.266852    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:34.266863    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:34.281884    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:34.281895    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:34.294763    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:34.294774    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:34.307301    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:34.307313    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:34.322256    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:34.322269    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:34.340673    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:34.340683    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:34.366125    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:34.366132    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:34.380221    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:34.380232    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:36.892961    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:41.894475    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:41.894668    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:41.915939    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:41.916041    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:41.932571    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:41.932647    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:41.948748    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:41.948815    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:41.960186    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:41.960248    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:41.971137    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:41.971201    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:41.982374    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:41.982434    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:41.993411    3892 logs.go:276] 0 containers: []
	W0731 10:21:41.993422    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:41.993472    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:42.014110    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:42.014129    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:42.014134    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:42.027050    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:42.027060    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:42.039163    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:42.039172    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:42.053195    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:42.053203    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:42.067561    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:42.067573    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:42.087995    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:42.088006    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:42.100069    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:42.100079    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:42.125466    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:42.125474    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:42.137838    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:42.137848    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:42.177677    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:42.177694    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:42.190524    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:42.190534    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:42.203593    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:42.203603    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:42.207902    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:42.207909    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:42.248193    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:42.248206    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:42.262320    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:42.262331    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:44.786579    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:49.788948    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:49.789203    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:49.817309    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:49.817410    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:49.834254    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:49.834337    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:49.849089    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:49.849167    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:49.861130    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:49.861200    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:49.872641    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:49.872704    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:49.883559    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:49.883622    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:49.894281    3892 logs.go:276] 0 containers: []
	W0731 10:21:49.894299    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:49.894355    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:49.905066    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:49.905082    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:49.905089    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:49.927151    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:49.927163    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:49.945021    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:49.945030    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:49.984732    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:49.984742    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:50.020645    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:50.020659    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:50.035578    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:50.035587    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:50.047561    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:50.047572    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:50.059690    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:50.059700    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:50.075096    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:50.075110    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:50.079498    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:50.079503    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:50.093746    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:50.093760    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:50.105675    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:50.105685    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:50.117827    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:50.117839    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:21:50.130292    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:50.130307    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:50.142786    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:50.142800    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:52.670027    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:21:57.670438    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:21:57.670560    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:21:57.688162    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:21:57.688248    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:21:57.701377    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:21:57.701449    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:21:57.712624    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:21:57.712699    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:21:57.723529    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:21:57.723599    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:21:57.734473    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:21:57.734545    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:21:57.745133    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:21:57.745198    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:21:57.755691    3892 logs.go:276] 0 containers: []
	W0731 10:21:57.755702    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:21:57.755763    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:21:57.766027    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:21:57.766048    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:21:57.766054    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:21:57.805071    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:21:57.805080    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:21:57.820060    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:21:57.820070    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:21:57.832083    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:21:57.832095    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:21:57.855599    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:21:57.855606    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:21:57.873434    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:21:57.873448    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:21:57.888390    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:21:57.888404    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:21:57.906410    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:21:57.906422    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:21:57.917821    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:21:57.917832    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:21:57.956726    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:21:57.956737    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:21:57.969023    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:21:57.969036    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:21:57.973996    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:21:57.974002    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:21:57.986331    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:21:57.986342    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:21:57.999687    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:21:57.999698    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:21:58.012890    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:21:58.012902    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:22:00.526863    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:22:05.529068    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:22:05.529225    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:22:05.546938    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:22:05.547012    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:22:05.558375    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:22:05.558436    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:22:05.569025    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:22:05.569094    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:22:05.580695    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:22:05.580770    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:22:05.591365    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:22:05.591440    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:22:05.602199    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:22:05.602274    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:22:05.612560    3892 logs.go:276] 0 containers: []
	W0731 10:22:05.612573    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:22:05.612631    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:22:05.622685    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:22:05.622701    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:22:05.622707    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:22:05.637485    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:22:05.637497    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:22:05.652216    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:22:05.652226    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:22:05.673742    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:22:05.673756    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:22:05.685682    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:22:05.685694    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:22:05.722574    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:22:05.722582    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:22:05.760382    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:22:05.760391    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:22:05.772179    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:22:05.772194    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:22:05.785126    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:22:05.785136    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:22:05.796627    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:22:05.796638    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:22:05.810414    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:22:05.810428    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:22:05.825001    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:22:05.825016    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:22:05.837232    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:22:05.837242    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:22:05.861126    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:22:05.861136    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:22:05.865737    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:22:05.865745    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:22:08.378801    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:22:13.381060    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:22:13.381342    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:22:13.406189    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:22:13.406311    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:22:13.423098    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:22:13.423190    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:22:13.436316    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:22:13.436381    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:22:13.449157    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:22:13.449228    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:22:13.464601    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:22:13.464664    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:22:13.475661    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:22:13.475725    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:22:13.485370    3892 logs.go:276] 0 containers: []
	W0731 10:22:13.485383    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:22:13.485434    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:22:13.496099    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:22:13.496114    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:22:13.496119    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:22:13.507792    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:22:13.507802    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:22:13.519165    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:22:13.519175    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:22:13.533518    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:22:13.533528    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:22:13.568694    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:22:13.568703    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:22:13.582794    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:22:13.582804    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:22:13.595705    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:22:13.595721    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:22:13.607130    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:22:13.607139    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:22:13.631624    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:22:13.631631    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:22:13.670913    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:22:13.670923    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:22:13.682876    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:22:13.682886    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:22:13.694864    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:22:13.694874    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:22:13.712256    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:22:13.712265    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:22:13.727114    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:22:13.727125    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:22:13.738670    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:22:13.738683    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:22:16.244996    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:22:21.247393    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:22:21.247797    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 10:22:21.287037    3892 logs.go:276] 1 containers: [3f64fd887a8f]
	I0731 10:22:21.287169    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 10:22:21.309154    3892 logs.go:276] 1 containers: [9e5b8aa2f3a9]
	I0731 10:22:21.309264    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 10:22:21.324220    3892 logs.go:276] 4 containers: [96bfe1c90f2d 467687d69a56 bfa7b7af9caa e524fc97276c]
	I0731 10:22:21.324305    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 10:22:21.337050    3892 logs.go:276] 1 containers: [bce96cefdc9a]
	I0731 10:22:21.337119    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 10:22:21.347894    3892 logs.go:276] 1 containers: [470c14adb355]
	I0731 10:22:21.347961    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 10:22:21.358325    3892 logs.go:276] 1 containers: [506f9507bee9]
	I0731 10:22:21.358391    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 10:22:21.368102    3892 logs.go:276] 0 containers: []
	W0731 10:22:21.368113    3892 logs.go:278] No container was found matching "kindnet"
	I0731 10:22:21.368168    3892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 10:22:21.378812    3892 logs.go:276] 1 containers: [9be17e1aefd8]
	I0731 10:22:21.378830    3892 logs.go:123] Gathering logs for coredns [bfa7b7af9caa] ...
	I0731 10:22:21.378837    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfa7b7af9caa"
	I0731 10:22:21.390662    3892 logs.go:123] Gathering logs for kubelet ...
	I0731 10:22:21.390673    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 10:22:21.426812    3892 logs.go:123] Gathering logs for describe nodes ...
	I0731 10:22:21.426820    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 10:22:21.467096    3892 logs.go:123] Gathering logs for kube-apiserver [3f64fd887a8f] ...
	I0731 10:22:21.467106    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f64fd887a8f"
	I0731 10:22:21.481706    3892 logs.go:123] Gathering logs for coredns [96bfe1c90f2d] ...
	I0731 10:22:21.481717    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96bfe1c90f2d"
	I0731 10:22:21.493661    3892 logs.go:123] Gathering logs for coredns [467687d69a56] ...
	I0731 10:22:21.493673    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467687d69a56"
	I0731 10:22:21.505930    3892 logs.go:123] Gathering logs for storage-provisioner [9be17e1aefd8] ...
	I0731 10:22:21.505939    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be17e1aefd8"
	I0731 10:22:21.517350    3892 logs.go:123] Gathering logs for dmesg ...
	I0731 10:22:21.517361    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 10:22:21.521797    3892 logs.go:123] Gathering logs for Docker ...
	I0731 10:22:21.521802    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 10:22:21.544099    3892 logs.go:123] Gathering logs for etcd [9e5b8aa2f3a9] ...
	I0731 10:22:21.544107    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e5b8aa2f3a9"
	I0731 10:22:21.557555    3892 logs.go:123] Gathering logs for coredns [e524fc97276c] ...
	I0731 10:22:21.557564    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e524fc97276c"
	I0731 10:22:21.569197    3892 logs.go:123] Gathering logs for kube-scheduler [bce96cefdc9a] ...
	I0731 10:22:21.569208    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bce96cefdc9a"
	I0731 10:22:21.584127    3892 logs.go:123] Gathering logs for kube-proxy [470c14adb355] ...
	I0731 10:22:21.584137    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 470c14adb355"
	I0731 10:22:21.595945    3892 logs.go:123] Gathering logs for kube-controller-manager [506f9507bee9] ...
	I0731 10:22:21.595955    3892 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 506f9507bee9"
	I0731 10:22:21.613448    3892 logs.go:123] Gathering logs for container status ...
	I0731 10:22:21.613458    3892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 10:22:24.126927    3892 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 10:22:29.129488    3892 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 10:22:29.136925    3892 out.go:177] 
	W0731 10:22:29.140703    3892 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 10:22:29.140731    3892 out.go:239] * 
	W0731 10:22:29.143251    3892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:22:29.158618    3892 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-608000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (610.22s)
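
The failure above shows the wait loop this log repeats for roughly ten minutes: api_server.go:253 probes https://10.0.2.15:8443/healthz, each probe dies with a client-side timeout at api_server.go:269, and logs.go gathers component logs between attempts until the 6m0s node wait expires with GUEST_START. A minimal Go sketch of that polling pattern, assuming the ~5s per-request timeout and ~2.5s inter-probe pause suggested by the timestamps (the function name and constants are illustrative, not minikube's actual code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz probes the apiserver's /healthz endpoint until it
	// returns 200 OK or the overall deadline (6m0s in the log) elapses.
	func waitForHealthz(url string, overall, perRequest time.Duration) error {
		client := &http.Client{
			Timeout: perRequest, // source of "Client.Timeout exceeded while awaiting headers"
			Transport: &http.Transport{
				// the apiserver serves a self-signed certificate during bring-up
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // pause between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}

The log gathering between probes is a two-stage pattern: docker ps -a --filter=name=k8s_<component> --format={{.ID}} enumerates container IDs per component, then docker logs --tail 400 <id> tails each one (with a `which crictl || echo crictl` fallback for the container-status dump). A sketch of that stage, assuming docker is on PATH and run locally rather than over SSH as minikube does (containerIDs is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists container IDs whose names match k8s_<component>,
	// mirroring the docker ps calls in the log above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("coredns")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		for _, id := range ids {
			// tail the last 400 lines, as in "docker logs --tail 400 <id>"
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== coredns [%s] ===\n%s", id, logs)
		}
	}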

TestPause/serial/Start (9.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-280000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-280000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.810115333s)

-- stdout --
	* [pause-280000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-280000" primary control-plane node in "pause-280000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-280000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-280000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-280000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-280000 -n pause-280000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-280000 -n pause-280000: exit status 7 (30.675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-280000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.84s)
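
This failure and the NoKubernetes and NetworkPlugins failures below share one root cause: nothing is listening on /var/run/socket_vmnet, so every qemu2 VM create or restart fails with "Connection refused" before provisioning begins (the post-mortem's exit status 7 with state "Stopped" is consistent with a host that never came up). The condition can be checked directly by dialing the unix socket; a minimal sketch, assuming the default socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the socket_vmnet control socket that the qemu2 driver needs.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Reproduces the log's connection-refused error without minikube.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}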

TestNoKubernetes/serial/StartWithK8s (9.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 : exit status 80 (9.727843958s)

-- stdout --
	* [NoKubernetes-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-330000" primary control-plane node in "NoKubernetes-330000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-330000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-330000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000: exit status 7 (65.378625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.79s)
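
The stdout block above shows minikube's one-retry provisioning behavior: the first create fails, the partial profile is deleted, a second create is attempted, and only then does it exit with GUEST_PROVISION. A compressed sketch of that control flow (createVM and deleteVM are hypothetical stand-ins, not the real qemu2 driver functions):

	package main

	import (
		"errors"
		"fmt"
	)

	var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	// createVM stands in for the driver's host-create step; here it always
	// fails the way every attempt in this log does.
	func createVM() error { return errConnRefused }

	// deleteVM stands in for the cleanup between the two attempts.
	func deleteVM() { fmt.Println(`* Deleting "NoKubernetes-330000" in qemu2 ...`) }

	func main() {
		if err := createVM(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteVM()
			if err := createVM(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest:", err)
			}
		}
	}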

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 : exit status 80 (5.238346667s)

-- stdout --
	* [NoKubernetes-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-330000
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000: exit status 7 (64.979625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244293042s)

-- stdout --
	* [NoKubernetes-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-330000
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000: exit status 7 (69.020667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (7.04s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 : exit status 80 (7.009452417s)

-- stdout --
	* [NoKubernetes-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-330000
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-330000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-330000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-330000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-330000 -n NoKubernetes-330000: exit status 7 (29.633834ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-330000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (7.04s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19349
- KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current339268473/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.11s)
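
Both TestHyperkitDriverSkipUpgrade variants fail identically: hyperkit is an Intel-only hypervisor, so on darwin/arm64 the driver check refuses to run (DRV_UNSUPPORTED_OS, exit status 56) before any upgrade logic is exercised. A sketch of that kind of platform gate (illustrative; not minikube's actual check):

	package main

	import (
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		// hyperkit only exists for x86-64 macOS; reject anything else up front.
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
			os.Exit(56) // matches "failed to run minikube. got: exit status 56"
		}
		fmt.Println("hyperkit driver is supported on this platform")
	}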

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.68s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
E0731 10:23:14.277546    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19349
- KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1042127292/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.68s)

TestNetworkPlugins/group/auto/Start (9.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.754199042s)

-- stdout --
	* [auto-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-336000" primary control-plane node in "auto-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:23:49.979422    4296 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:23:49.979723    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:49.979728    4296 out.go:304] Setting ErrFile to fd 2...
	I0731 10:23:49.979730    4296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:23:49.979976    4296 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:23:49.981429    4296 out.go:298] Setting JSON to false
	I0731 10:23:49.997859    4296 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3193,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:23:49.997925    4296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:23:50.013795    4296 out.go:177] * [auto-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:23:50.016846    4296 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:23:50.016874    4296 notify.go:220] Checking for updates...
	I0731 10:23:50.022762    4296 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:23:50.025787    4296 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:23:50.028830    4296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:23:50.031827    4296 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:23:50.034797    4296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:23:50.038147    4296 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:50.038224    4296 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:23:50.038294    4296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:23:50.042667    4296 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:23:50.049791    4296 start.go:297] selected driver: qemu2
	I0731 10:23:50.049799    4296 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:23:50.049806    4296 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:23:50.052239    4296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:23:50.054719    4296 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:23:50.057839    4296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:23:50.057868    4296 cni.go:84] Creating CNI manager for ""
	I0731 10:23:50.057875    4296 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:23:50.057880    4296 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:23:50.057909    4296 start.go:340] cluster config:
	{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:23:50.061620    4296 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:23:50.068754    4296 out.go:177] * Starting "auto-336000" primary control-plane node in "auto-336000" cluster
	I0731 10:23:50.072778    4296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:23:50.072794    4296 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:23:50.072807    4296 cache.go:56] Caching tarball of preloaded images
	I0731 10:23:50.072867    4296 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:23:50.072873    4296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:23:50.072952    4296 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/auto-336000/config.json ...
	I0731 10:23:50.072964    4296 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/auto-336000/config.json: {Name:mk8dfe8b5739375624cf35ee7277525cbc4e84ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:23:50.073180    4296 start.go:360] acquireMachinesLock for auto-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:50.073213    4296 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "auto-336000"
	I0731 10:23:50.073224    4296 start.go:93] Provisioning new machine with config: &{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:50.073263    4296 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:50.081825    4296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:23:50.099552    4296 start.go:159] libmachine.API.Create for "auto-336000" (driver="qemu2")
	I0731 10:23:50.099582    4296 client.go:168] LocalClient.Create starting
	I0731 10:23:50.099651    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:50.099681    4296 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:50.099691    4296 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:50.099724    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:50.099748    4296 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:50.099757    4296 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:50.100120    4296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:50.250689    4296 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:50.323779    4296 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:50.323784    4296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:50.323964    4296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:50.333325    4296 main.go:141] libmachine: STDOUT: 
	I0731 10:23:50.333343    4296 main.go:141] libmachine: STDERR: 
	I0731 10:23:50.333400    4296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2 +20000M
	I0731 10:23:50.341189    4296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:50.341203    4296 main.go:141] libmachine: STDERR: 
	I0731 10:23:50.341221    4296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:50.341227    4296 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:50.341239    4296 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:50.341268    4296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:00:9d:be:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:50.342899    4296 main.go:141] libmachine: STDOUT: 
	I0731 10:23:50.342915    4296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:50.342941    4296 client.go:171] duration metric: took 243.358042ms to LocalClient.Create
	I0731 10:23:52.345091    4296 start.go:128] duration metric: took 2.271842916s to createHost
	I0731 10:23:52.345214    4296 start.go:83] releasing machines lock for "auto-336000", held for 2.271978584s
	W0731 10:23:52.345307    4296 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:52.356357    4296 out.go:177] * Deleting "auto-336000" in qemu2 ...
	W0731 10:23:52.388873    4296 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:52.388901    4296 start.go:729] Will try again in 5 seconds ...
	I0731 10:23:57.391033    4296 start.go:360] acquireMachinesLock for auto-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:23:57.391494    4296 start.go:364] duration metric: took 334.083µs to acquireMachinesLock for "auto-336000"
	I0731 10:23:57.391607    4296 start.go:93] Provisioning new machine with config: &{Name:auto-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:23:57.391942    4296 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:23:57.402249    4296 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:23:57.453070    4296 start.go:159] libmachine.API.Create for "auto-336000" (driver="qemu2")
	I0731 10:23:57.453115    4296 client.go:168] LocalClient.Create starting
	I0731 10:23:57.453221    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:23:57.453283    4296 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:57.453304    4296 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:57.453363    4296 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:23:57.453407    4296 main.go:141] libmachine: Decoding PEM data...
	I0731 10:23:57.453418    4296 main.go:141] libmachine: Parsing certificate...
	I0731 10:23:57.453942    4296 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:23:57.616766    4296 main.go:141] libmachine: Creating SSH key...
	I0731 10:23:57.641494    4296 main.go:141] libmachine: Creating Disk image...
	I0731 10:23:57.641503    4296 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:23:57.641687    4296 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:57.650718    4296 main.go:141] libmachine: STDOUT: 
	I0731 10:23:57.650738    4296 main.go:141] libmachine: STDERR: 
	I0731 10:23:57.650782    4296 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2 +20000M
	I0731 10:23:57.658594    4296 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:23:57.658611    4296 main.go:141] libmachine: STDERR: 
	I0731 10:23:57.658625    4296 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:57.658630    4296 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:23:57.658643    4296 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:23:57.658669    4296 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:9b:1a:a5:86:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/auto-336000/disk.qcow2
	I0731 10:23:57.660297    4296 main.go:141] libmachine: STDOUT: 
	I0731 10:23:57.660313    4296 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:23:57.660324    4296 client.go:171] duration metric: took 207.208667ms to LocalClient.Create
	I0731 10:23:59.662537    4296 start.go:128] duration metric: took 2.270600166s to createHost
	I0731 10:23:59.662626    4296 start.go:83] releasing machines lock for "auto-336000", held for 2.271119792s
	W0731 10:23:59.663029    4296 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:23:59.677540    4296 out.go:177] 
	W0731 10:23:59.681740    4296 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:23:59.681764    4296 out.go:239] * 
	* 
	W0731 10:23:59.684324    4296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:23:59.692570    4296 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.76s)
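
Every failure in this group has the same root cause: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet, so no VM ever boots and the test aborts with exit status 80. That one step can be checked in isolation. The probe below is a hypothetical sketch, not part of net_test.go or minikube; the socket path comes from the SocketVMnetPath field in the cluster config logged above.

// socketprobe.go - dial the socket_vmnet control socket the same way
// socket_vmnet_client must before it can spawn qemu-system-aarch64.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "Connection refused" matches the STDERR captured above: the socket
		// file exists but no socket_vmnet daemon is listening behind it.
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refused dial points at the CI host rather than the tests. On a Homebrew install the daemon is normally restored with sudo brew services start socket_vmnet, though the exact service setup depends on how socket_vmnet was installed.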

TestNetworkPlugins/group/kindnet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.877869375s)

-- stdout --
	* [kindnet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-336000" primary control-plane node in "kindnet-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:24:01.824246    4407 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:24:01.824360    4407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:01.824363    4407 out.go:304] Setting ErrFile to fd 2...
	I0731 10:24:01.824365    4407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:01.824491    4407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:24:01.825521    4407 out.go:298] Setting JSON to false
	I0731 10:24:01.841562    4407 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3205,"bootTime":1722443436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:24:01.841645    4407 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:24:01.847211    4407 out.go:177] * [kindnet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:24:01.854059    4407 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:24:01.854131    4407 notify.go:220] Checking for updates...
	I0731 10:24:01.860134    4407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:24:01.863099    4407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:24:01.866163    4407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:24:01.869152    4407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:24:01.870598    4407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:24:01.874491    4407 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:01.874574    4407 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:01.874626    4407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:24:01.879117    4407 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:24:01.884072    4407 start.go:297] selected driver: qemu2
	I0731 10:24:01.884080    4407 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:24:01.884088    4407 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:24:01.886383    4407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:24:01.889117    4407 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:24:01.892261    4407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:24:01.892288    4407 cni.go:84] Creating CNI manager for "kindnet"
	I0731 10:24:01.892292    4407 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:24:01.892316    4407 start.go:340] cluster config:
	{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:24:01.895851    4407 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:24:01.903086    4407 out.go:177] * Starting "kindnet-336000" primary control-plane node in "kindnet-336000" cluster
	I0731 10:24:01.907118    4407 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:24:01.907138    4407 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:24:01.907150    4407 cache.go:56] Caching tarball of preloaded images
	I0731 10:24:01.907211    4407 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:24:01.907218    4407 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:24:01.907278    4407 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kindnet-336000/config.json ...
	I0731 10:24:01.907289    4407 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kindnet-336000/config.json: {Name:mk579440afd809b8a69f635cb9825ff1a035167b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:24:01.907513    4407 start.go:360] acquireMachinesLock for kindnet-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:01.907547    4407 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "kindnet-336000"
	I0731 10:24:01.907558    4407 start.go:93] Provisioning new machine with config: &{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:01.907590    4407 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:01.914033    4407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:01.931728    4407 start.go:159] libmachine.API.Create for "kindnet-336000" (driver="qemu2")
	I0731 10:24:01.931755    4407 client.go:168] LocalClient.Create starting
	I0731 10:24:01.931811    4407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:01.931839    4407 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:01.931849    4407 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:01.931885    4407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:01.931910    4407 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:01.931923    4407 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:01.932310    4407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:02.083210    4407 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:02.214158    4407 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:02.214165    4407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:02.214361    4407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:02.223760    4407 main.go:141] libmachine: STDOUT: 
	I0731 10:24:02.223781    4407 main.go:141] libmachine: STDERR: 
	I0731 10:24:02.223842    4407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2 +20000M
	I0731 10:24:02.231645    4407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:02.231673    4407 main.go:141] libmachine: STDERR: 
	I0731 10:24:02.231690    4407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:02.231695    4407 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:02.231706    4407 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:02.231741    4407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:0e:9c:d7:4f:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:02.233348    4407 main.go:141] libmachine: STDOUT: 
	I0731 10:24:02.233364    4407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:02.233381    4407 client.go:171] duration metric: took 301.6265ms to LocalClient.Create
	I0731 10:24:04.235523    4407 start.go:128] duration metric: took 2.327948083s to createHost
	I0731 10:24:04.235591    4407 start.go:83] releasing machines lock for "kindnet-336000", held for 2.32806875s
	W0731 10:24:04.235676    4407 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:04.248887    4407 out.go:177] * Deleting "kindnet-336000" in qemu2 ...
	W0731 10:24:04.280129    4407 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:04.280156    4407 start.go:729] Will try again in 5 seconds ...
	I0731 10:24:09.281820    4407 start.go:360] acquireMachinesLock for kindnet-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:09.282263    4407 start.go:364] duration metric: took 341.125µs to acquireMachinesLock for "kindnet-336000"
	I0731 10:24:09.282380    4407 start.go:93] Provisioning new machine with config: &{Name:kindnet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:09.282685    4407 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:09.299468    4407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:09.350399    4407 start.go:159] libmachine.API.Create for "kindnet-336000" (driver="qemu2")
	I0731 10:24:09.350443    4407 client.go:168] LocalClient.Create starting
	I0731 10:24:09.350560    4407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:09.350618    4407 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:09.350633    4407 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:09.350693    4407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:09.350734    4407 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:09.350754    4407 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:09.351278    4407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:09.514041    4407 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:09.608879    4407 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:09.608885    4407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:09.609061    4407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:09.618176    4407 main.go:141] libmachine: STDOUT: 
	I0731 10:24:09.618265    4407 main.go:141] libmachine: STDERR: 
	I0731 10:24:09.618315    4407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2 +20000M
	I0731 10:24:09.626116    4407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:09.626131    4407 main.go:141] libmachine: STDERR: 
	I0731 10:24:09.626143    4407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:09.626148    4407 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:09.626159    4407 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:09.626184    4407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:62:d5:16:7e:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kindnet-336000/disk.qcow2
	I0731 10:24:09.627720    4407 main.go:141] libmachine: STDOUT: 
	I0731 10:24:09.627800    4407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:09.627812    4407 client.go:171] duration metric: took 277.367459ms to LocalClient.Create
	I0731 10:24:11.629995    4407 start.go:128] duration metric: took 2.347308625s to createHost
	I0731 10:24:11.630079    4407 start.go:83] releasing machines lock for "kindnet-336000", held for 2.347828959s
	W0731 10:24:11.630375    4407 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:11.643795    4407 out.go:177] 
	W0731 10:24:11.649046    4407 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:24:11.649092    4407 out.go:239] * 
	* 
	W0731 10:24:11.651953    4407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:24:11.660969    4407 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.88s)
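
The kindnet run reproduces the same recovery path seen with auto: the first StartHost attempt fails, the half-created profile is deleted, minikube waits five seconds, retries once, and only then exits with GUEST_PROVISION. A simplified model of that control flow, assumed for illustration only (this is not minikube's actual start.go), looks like this:

// retrysketch.go - the two-attempt start behavior visible in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for libmachine.API.Create; in this run it always
// fails because socket_vmnet_client cannot connect to /var/run/socket_vmnet.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	// Matches `! StartHost failed, but will try again` and the
	// `* Deleting ... in qemu2` / `Will try again in 5 seconds` lines.
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second)
	if err := createHost(name); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry("kindnet-336000"); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

With the daemon down, both attempts fail in about 2.3 seconds each (see the createHost duration metrics above), so each of these tests ends after roughly ten seconds: two attempts plus the five-second pause.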

TestNetworkPlugins/group/flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.793915958s)

-- stdout --
	* [flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-336000" primary control-plane node in "flannel-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:24:13.905300    4520 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:24:13.905464    4520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:13.905467    4520 out.go:304] Setting ErrFile to fd 2...
	I0731 10:24:13.905470    4520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:13.905588    4520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:24:13.906641    4520 out.go:298] Setting JSON to false
	I0731 10:24:13.922664    4520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3217,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:24:13.922735    4520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:24:13.928228    4520 out.go:177] * [flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:24:13.936206    4520 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:24:13.936247    4520 notify.go:220] Checking for updates...
	I0731 10:24:13.942090    4520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:24:13.945161    4520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:24:13.948218    4520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:24:13.949579    4520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:24:13.952144    4520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:24:13.955574    4520 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:13.955650    4520 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:13.955702    4520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:24:13.960050    4520 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:24:13.967130    4520 start.go:297] selected driver: qemu2
	I0731 10:24:13.967136    4520 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:24:13.967150    4520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:24:13.969311    4520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:24:13.972168    4520 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:24:13.975353    4520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:24:13.975381    4520 cni.go:84] Creating CNI manager for "flannel"
	I0731 10:24:13.975386    4520 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 10:24:13.975416    4520 start.go:340] cluster config:
	{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:24:13.979156    4520 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:24:13.985098    4520 out.go:177] * Starting "flannel-336000" primary control-plane node in "flannel-336000" cluster
	I0731 10:24:13.989204    4520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:24:13.989222    4520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:24:13.989234    4520 cache.go:56] Caching tarball of preloaded images
	I0731 10:24:13.989295    4520 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:24:13.989302    4520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:24:13.989363    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/flannel-336000/config.json ...
	I0731 10:24:13.989374    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/flannel-336000/config.json: {Name:mkcc5b54864000bda425fe9144aa5ae4d7c510ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:24:13.989598    4520 start.go:360] acquireMachinesLock for flannel-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:13.989632    4520 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "flannel-336000"
	I0731 10:24:13.989642    4520 start.go:93] Provisioning new machine with config: &{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:13.989672    4520 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:13.998162    4520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:14.015551    4520 start.go:159] libmachine.API.Create for "flannel-336000" (driver="qemu2")
	I0731 10:24:14.015584    4520 client.go:168] LocalClient.Create starting
	I0731 10:24:14.015651    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:14.015685    4520 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:14.015695    4520 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:14.015733    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:14.015757    4520 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:14.015771    4520 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:14.016155    4520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:14.166179    4520 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:14.240894    4520 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:14.240900    4520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:14.241091    4520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:14.250467    4520 main.go:141] libmachine: STDOUT: 
	I0731 10:24:14.250489    4520 main.go:141] libmachine: STDERR: 
	I0731 10:24:14.250550    4520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2 +20000M
	I0731 10:24:14.258419    4520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:14.258431    4520 main.go:141] libmachine: STDERR: 
	I0731 10:24:14.258448    4520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:14.258453    4520 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:14.258464    4520 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:14.258498    4520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7d:83:4d:a9:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:14.260112    4520 main.go:141] libmachine: STDOUT: 
	I0731 10:24:14.260124    4520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:14.260141    4520 client.go:171] duration metric: took 244.558334ms to LocalClient.Create
	I0731 10:24:16.262290    4520 start.go:128] duration metric: took 2.272634125s to createHost
	I0731 10:24:16.262344    4520 start.go:83] releasing machines lock for "flannel-336000", held for 2.272737625s
	W0731 10:24:16.262426    4520 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:16.278644    4520 out.go:177] * Deleting "flannel-336000" in qemu2 ...
	W0731 10:24:16.306386    4520 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:16.306409    4520 start.go:729] Will try again in 5 seconds ...
	I0731 10:24:21.308648    4520 start.go:360] acquireMachinesLock for flannel-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:21.309317    4520 start.go:364] duration metric: took 541.084µs to acquireMachinesLock for "flannel-336000"
	I0731 10:24:21.309497    4520 start.go:93] Provisioning new machine with config: &{Name:flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:21.309787    4520 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:21.326689    4520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:21.379364    4520 start.go:159] libmachine.API.Create for "flannel-336000" (driver="qemu2")
	I0731 10:24:21.379413    4520 client.go:168] LocalClient.Create starting
	I0731 10:24:21.379556    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:21.379614    4520 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:21.379631    4520 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:21.379691    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:21.379734    4520 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:21.379745    4520 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:21.380270    4520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:21.561358    4520 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:21.606519    4520 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:21.606524    4520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:21.606692    4520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:21.615952    4520 main.go:141] libmachine: STDOUT: 
	I0731 10:24:21.615965    4520 main.go:141] libmachine: STDERR: 
	I0731 10:24:21.616016    4520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2 +20000M
	I0731 10:24:21.623764    4520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:21.623778    4520 main.go:141] libmachine: STDERR: 
	I0731 10:24:21.623790    4520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:21.623794    4520 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:21.623807    4520 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:21.623841    4520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:d0:53:ff:38:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/flannel-336000/disk.qcow2
	I0731 10:24:21.625512    4520 main.go:141] libmachine: STDOUT: 
	I0731 10:24:21.625534    4520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:21.625552    4520 client.go:171] duration metric: took 246.136958ms to LocalClient.Create
	I0731 10:24:23.627701    4520 start.go:128] duration metric: took 2.317913417s to createHost
	I0731 10:24:23.627746    4520 start.go:83] releasing machines lock for "flannel-336000", held for 2.318440708s
	W0731 10:24:23.628076    4520 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:23.639701    4520 out.go:177] 
	W0731 10:24:23.643779    4520 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:24:23.643836    4520 out.go:239] * 
	* 
	W0731 10:24:23.646540    4520 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:24:23.657668    4520 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.80s)
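
All of the failing network-plugin starts in this group share one root cause: the qemu2 driver invokes /opt/socket_vmnet/bin/socket_vmnet_client, which is refused a connection to /var/run/socket_vmnet, so no VM ever boots. A quick way to confirm whether the socket_vmnet daemon is listening on the build agent is to dial the same Unix socket directly; the following is a minimal standalone sketch (not part of the suite), assuming the default socket path shown in the logs:

	package main

	// probe.go: dial the socket_vmnet control socket that every failed
	// start above tried to reach. A "connection refused" error here
	// matches the `Failed to connect to "/var/run/socket_vmnet"` lines
	// in the logs and means the socket_vmnet daemon is not running.
	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}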

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.9123555s)

-- stdout --
	* [enable-default-cni-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-336000" primary control-plane node in "enable-default-cni-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:24:26.029382    4639 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:24:26.029525    4639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:26.029528    4639 out.go:304] Setting ErrFile to fd 2...
	I0731 10:24:26.029531    4639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:26.029666    4639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:24:26.030687    4639 out.go:298] Setting JSON to false
	I0731 10:24:26.046821    4639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3230,"bootTime":1722443436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:24:26.046893    4639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:24:26.052855    4639 out.go:177] * [enable-default-cni-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:24:26.061924    4639 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:24:26.061958    4639 notify.go:220] Checking for updates...
	I0731 10:24:26.066340    4639 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:24:26.069781    4639 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:24:26.072850    4639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:24:26.075854    4639 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:24:26.078837    4639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:24:26.082222    4639 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:26.082300    4639 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:26.082354    4639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:24:26.086876    4639 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:24:26.093815    4639 start.go:297] selected driver: qemu2
	I0731 10:24:26.093822    4639 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:24:26.093829    4639 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:24:26.096105    4639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:24:26.098870    4639 out.go:177] * Automatically selected the socket_vmnet network
	E0731 10:24:26.101967    4639 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 10:24:26.101984    4639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:24:26.102007    4639 cni.go:84] Creating CNI manager for "bridge"
	I0731 10:24:26.102010    4639 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:24:26.102036    4639 start.go:340] cluster config:
	{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:24:26.105606    4639 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:24:26.112632    4639 out.go:177] * Starting "enable-default-cni-336000" primary control-plane node in "enable-default-cni-336000" cluster
	I0731 10:24:26.116863    4639 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:24:26.116881    4639 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:24:26.116893    4639 cache.go:56] Caching tarball of preloaded images
	I0731 10:24:26.116984    4639 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:24:26.117003    4639 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:24:26.117078    4639 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/enable-default-cni-336000/config.json ...
	I0731 10:24:26.117089    4639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/enable-default-cni-336000/config.json: {Name:mkd7140183012c3a3ccd4b4c636701020613eb0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:24:26.117605    4639 start.go:360] acquireMachinesLock for enable-default-cni-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:26.117644    4639 start.go:364] duration metric: took 32.667µs to acquireMachinesLock for "enable-default-cni-336000"
	I0731 10:24:26.117656    4639 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:26.117698    4639 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:26.123830    4639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:26.140531    4639 start.go:159] libmachine.API.Create for "enable-default-cni-336000" (driver="qemu2")
	I0731 10:24:26.140561    4639 client.go:168] LocalClient.Create starting
	I0731 10:24:26.140628    4639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:26.140657    4639 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:26.140665    4639 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:26.140700    4639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:26.140723    4639 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:26.140735    4639 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:26.141199    4639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:26.292501    4639 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:26.527661    4639 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:26.527675    4639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:26.528183    4639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:26.538082    4639 main.go:141] libmachine: STDOUT: 
	I0731 10:24:26.538107    4639 main.go:141] libmachine: STDERR: 
	I0731 10:24:26.538155    4639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2 +20000M
	I0731 10:24:26.546004    4639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:26.546020    4639 main.go:141] libmachine: STDERR: 
	I0731 10:24:26.546045    4639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:26.546048    4639 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:26.546061    4639 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:26.546092    4639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:05:d2:f7:33:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:26.547735    4639 main.go:141] libmachine: STDOUT: 
	I0731 10:24:26.547751    4639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:26.547771    4639 client.go:171] duration metric: took 407.212041ms to LocalClient.Create
	I0731 10:24:28.549995    4639 start.go:128] duration metric: took 2.432314042s to createHost
	I0731 10:24:28.550065    4639 start.go:83] releasing machines lock for "enable-default-cni-336000", held for 2.432449375s
	W0731 10:24:28.550121    4639 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:28.557333    4639 out.go:177] * Deleting "enable-default-cni-336000" in qemu2 ...
	W0731 10:24:28.585960    4639 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:28.585982    4639 start.go:729] Will try again in 5 seconds ...
	I0731 10:24:33.588098    4639 start.go:360] acquireMachinesLock for enable-default-cni-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:33.588631    4639 start.go:364] duration metric: took 385.458µs to acquireMachinesLock for "enable-default-cni-336000"
	I0731 10:24:33.588780    4639 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:33.589081    4639 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:33.605912    4639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:33.656567    4639 start.go:159] libmachine.API.Create for "enable-default-cni-336000" (driver="qemu2")
	I0731 10:24:33.656614    4639 client.go:168] LocalClient.Create starting
	I0731 10:24:33.656744    4639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:33.656806    4639 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:33.656823    4639 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:33.656885    4639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:33.656928    4639 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:33.656938    4639 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:33.657546    4639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:33.818612    4639 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:33.849557    4639 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:33.849564    4639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:33.849742    4639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:33.858841    4639 main.go:141] libmachine: STDOUT: 
	I0731 10:24:33.858859    4639 main.go:141] libmachine: STDERR: 
	I0731 10:24:33.858914    4639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2 +20000M
	I0731 10:24:33.866603    4639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:33.866616    4639 main.go:141] libmachine: STDERR: 
	I0731 10:24:33.866642    4639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:33.866648    4639 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:33.866664    4639 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:33.866694    4639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3b:b7:c2:19:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/enable-default-cni-336000/disk.qcow2
	I0731 10:24:33.868334    4639 main.go:141] libmachine: STDOUT: 
	I0731 10:24:33.868349    4639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:33.868361    4639 client.go:171] duration metric: took 211.745958ms to LocalClient.Create
	I0731 10:24:35.870509    4639 start.go:128] duration metric: took 2.281433833s to createHost
	I0731 10:24:35.870573    4639 start.go:83] releasing machines lock for "enable-default-cni-336000", held for 2.281953791s
	W0731 10:24:35.870921    4639 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:35.884625    4639 out.go:177] 
	W0731 10:24:35.889566    4639 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:24:35.889592    4639 out.go:239] * 
	* 
	W0731 10:24:35.892154    4639 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:24:35.900643    4639 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
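
Note that in both attempts above the disk image is prepared successfully (qemu-img convert and qemu-img resize both return with empty STDERR); the failure occurs only afterwards, when the driver connects to socket_vmnet. The two qemu-img steps behind the "Creating Disk image..." lines follow the pattern sketched below; this is an assumed reconstruction for illustration, not minikube's actual source:

	package main

	// Assumed reconstruction of the disk-creation sequence in the logs:
	// convert the raw boot image to qcow2, then grow it by the requested
	// size ("+20000M" in these runs).
	import (
		"fmt"
		"log"
		"os/exec"
	)

	func createDisk(raw, qcow2, grow string) error {
		// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		// qemu-img resize <qcow2> +<size>M
		if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			log.Fatal(err)
		}
		log.Println("disk image ready")
	}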

TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.858085542s)

-- stdout --
	* [bridge-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-336000" primary control-plane node in "bridge-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:24:38.094196    4751 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:24:38.094314    4751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:38.094317    4751 out.go:304] Setting ErrFile to fd 2...
	I0731 10:24:38.094319    4751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:38.094491    4751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:24:38.095549    4751 out.go:298] Setting JSON to false
	I0731 10:24:38.111705    4751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3242,"bootTime":1722443436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:24:38.111797    4751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:24:38.118037    4751 out.go:177] * [bridge-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:24:38.122117    4751 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:24:38.122168    4751 notify.go:220] Checking for updates...
	I0731 10:24:38.130077    4751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:24:38.134075    4751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:24:38.136998    4751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:24:38.141008    4751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:24:38.144028    4751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:24:38.147382    4751 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:38.147451    4751 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:38.147499    4751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:24:38.152022    4751 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:24:38.159060    4751 start.go:297] selected driver: qemu2
	I0731 10:24:38.159067    4751 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:24:38.159075    4751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:24:38.161618    4751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:24:38.165078    4751 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:24:38.168159    4751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:24:38.168178    4751 cni.go:84] Creating CNI manager for "bridge"
	I0731 10:24:38.168199    4751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:24:38.168233    4751 start.go:340] cluster config:
	{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:24:38.172048    4751 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:24:38.176028    4751 out.go:177] * Starting "bridge-336000" primary control-plane node in "bridge-336000" cluster
	I0731 10:24:38.183045    4751 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:24:38.183062    4751 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:24:38.183076    4751 cache.go:56] Caching tarball of preloaded images
	I0731 10:24:38.183149    4751 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:24:38.183169    4751 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:24:38.183256    4751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/bridge-336000/config.json ...
	I0731 10:24:38.183270    4751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/bridge-336000/config.json: {Name:mk5d7b6a03ce0e642b13763baa7aa900081b04b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:24:38.183631    4751 start.go:360] acquireMachinesLock for bridge-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:38.183669    4751 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "bridge-336000"
	I0731 10:24:38.183679    4751 start.go:93] Provisioning new machine with config: &{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:38.183722    4751 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:38.190838    4751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:38.208197    4751 start.go:159] libmachine.API.Create for "bridge-336000" (driver="qemu2")
	I0731 10:24:38.208223    4751 client.go:168] LocalClient.Create starting
	I0731 10:24:38.208306    4751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:38.208340    4751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:38.208349    4751 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:38.208388    4751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:38.208412    4751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:38.208423    4751 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:38.208785    4751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:38.360537    4751 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:38.504238    4751 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:38.504245    4751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:38.504440    4751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:38.513916    4751 main.go:141] libmachine: STDOUT: 
	I0731 10:24:38.513931    4751 main.go:141] libmachine: STDERR: 
	I0731 10:24:38.514004    4751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2 +20000M
	I0731 10:24:38.521778    4751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:38.521790    4751 main.go:141] libmachine: STDERR: 
	I0731 10:24:38.521811    4751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:38.521816    4751 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:38.521827    4751 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:38.521853    4751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:6b:3a:fe:34:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:38.523503    4751 main.go:141] libmachine: STDOUT: 
	I0731 10:24:38.523520    4751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:38.523539    4751 client.go:171] duration metric: took 315.315958ms to LocalClient.Create
	I0731 10:24:40.525701    4751 start.go:128] duration metric: took 2.341995s to createHost
	I0731 10:24:40.525769    4751 start.go:83] releasing machines lock for "bridge-336000", held for 2.342128208s
	W0731 10:24:40.525825    4751 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:40.535802    4751 out.go:177] * Deleting "bridge-336000" in qemu2 ...
	W0731 10:24:40.570461    4751 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:40.570493    4751 start.go:729] Will try again in 5 seconds ...
	I0731 10:24:45.572692    4751 start.go:360] acquireMachinesLock for bridge-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:45.573120    4751 start.go:364] duration metric: took 342.541µs to acquireMachinesLock for "bridge-336000"
	I0731 10:24:45.573242    4751 start.go:93] Provisioning new machine with config: &{Name:bridge-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:45.573529    4751 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:45.590383    4751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:45.641747    4751 start.go:159] libmachine.API.Create for "bridge-336000" (driver="qemu2")
	I0731 10:24:45.641788    4751 client.go:168] LocalClient.Create starting
	I0731 10:24:45.641889    4751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:45.641947    4751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:45.641961    4751 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:45.642025    4751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:45.642069    4751 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:45.642092    4751 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:45.642598    4751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:45.804727    4751 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:45.856622    4751 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:45.856627    4751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:45.856796    4751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:45.865960    4751 main.go:141] libmachine: STDOUT: 
	I0731 10:24:45.865977    4751 main.go:141] libmachine: STDERR: 
	I0731 10:24:45.866028    4751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2 +20000M
	I0731 10:24:45.873830    4751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:45.873844    4751 main.go:141] libmachine: STDERR: 
	I0731 10:24:45.873857    4751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:45.873862    4751 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:45.873873    4751 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:45.873906    4751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:84:2e:85:28:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/bridge-336000/disk.qcow2
	I0731 10:24:45.875556    4751 main.go:141] libmachine: STDOUT: 
	I0731 10:24:45.875572    4751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:45.875584    4751 client.go:171] duration metric: took 233.795042ms to LocalClient.Create
	I0731 10:24:47.877726    4751 start.go:128] duration metric: took 2.304181s to createHost
	I0731 10:24:47.877792    4751 start.go:83] releasing machines lock for "bridge-336000", held for 2.304686416s
	W0731 10:24:47.878125    4751 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:47.888694    4751 out.go:177] 
	W0731 10:24:47.895748    4751 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:24:47.895774    4751 out.go:239] * 
	* 
	W0731 10:24:47.898363    4751 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:24:47.907669    4751 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
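Every Start failure in this group reduces to the same root cause, visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU is ever launched. A minimal diagnostic sketch for the CI host, assuming a lima-style socket_vmnet install matching the client path in the log (the daemon binary path and the --vmnet-gateway address below are assumptions, not values from this report):

	# Does the unix socket the tests expect exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon manually (binary path and gateway assumed):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet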

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.862330375s)

-- stdout --
	* [kubenet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-336000" primary control-plane node in "kubenet-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:24:50.101243    4863 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:24:50.101370    4863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:50.101373    4863 out.go:304] Setting ErrFile to fd 2...
	I0731 10:24:50.101376    4863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:24:50.101521    4863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:24:50.102626    4863 out.go:298] Setting JSON to false
	I0731 10:24:50.118760    4863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3254,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:24:50.118829    4863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:24:50.125323    4863 out.go:177] * [kubenet-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:24:50.133268    4863 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:24:50.133328    4863 notify.go:220] Checking for updates...
	I0731 10:24:50.139209    4863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:24:50.142241    4863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:24:50.145311    4863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:24:50.148261    4863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:24:50.151239    4863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:24:50.154579    4863 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:50.154652    4863 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:24:50.154706    4863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:24:50.159145    4863 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:24:50.166287    4863 start.go:297] selected driver: qemu2
	I0731 10:24:50.166295    4863 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:24:50.166304    4863 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:24:50.168535    4863 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:24:50.171170    4863 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:24:50.174340    4863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:24:50.174356    4863 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 10:24:50.174379    4863 start.go:340] cluster config:
	{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:24:50.177841    4863 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:24:50.185217    4863 out.go:177] * Starting "kubenet-336000" primary control-plane node in "kubenet-336000" cluster
	I0731 10:24:50.189188    4863 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:24:50.189200    4863 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:24:50.189209    4863 cache.go:56] Caching tarball of preloaded images
	I0731 10:24:50.189266    4863 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:24:50.189271    4863 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:24:50.189322    4863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kubenet-336000/config.json ...
	I0731 10:24:50.189333    4863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/kubenet-336000/config.json: {Name:mk3410bf0f897089f2b7170f2d5394935659bd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:24:50.189537    4863 start.go:360] acquireMachinesLock for kubenet-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:50.189571    4863 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "kubenet-336000"
	I0731 10:24:50.189582    4863 start.go:93] Provisioning new machine with config: &{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:50.189609    4863 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:50.197264    4863 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:50.214550    4863 start.go:159] libmachine.API.Create for "kubenet-336000" (driver="qemu2")
	I0731 10:24:50.214574    4863 client.go:168] LocalClient.Create starting
	I0731 10:24:50.214636    4863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:50.214668    4863 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:50.214676    4863 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:50.214713    4863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:50.214736    4863 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:50.214746    4863 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:50.215121    4863 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:50.370089    4863 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:50.492341    4863 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:50.492346    4863 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:50.492545    4863 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:50.501687    4863 main.go:141] libmachine: STDOUT: 
	I0731 10:24:50.501703    4863 main.go:141] libmachine: STDERR: 
	I0731 10:24:50.501762    4863 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2 +20000M
	I0731 10:24:50.509507    4863 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:50.509522    4863 main.go:141] libmachine: STDERR: 
	I0731 10:24:50.509534    4863 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:50.509539    4863 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:50.509552    4863 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:50.509580    4863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:29:a7:81:f6:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:50.511192    4863 main.go:141] libmachine: STDOUT: 
	I0731 10:24:50.511209    4863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:50.511225    4863 client.go:171] duration metric: took 296.653125ms to LocalClient.Create
	I0731 10:24:52.513370    4863 start.go:128] duration metric: took 2.323778583s to createHost
	I0731 10:24:52.513445    4863 start.go:83] releasing machines lock for "kubenet-336000", held for 2.32390225s
	W0731 10:24:52.513556    4863 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:52.524454    4863 out.go:177] * Deleting "kubenet-336000" in qemu2 ...
	W0731 10:24:52.557512    4863 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:52.557538    4863 start.go:729] Will try again in 5 seconds ...
	I0731 10:24:57.558950    4863 start.go:360] acquireMachinesLock for kubenet-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:24:57.559371    4863 start.go:364] duration metric: took 355.083µs to acquireMachinesLock for "kubenet-336000"
	I0731 10:24:57.559471    4863 start.go:93] Provisioning new machine with config: &{Name:kubenet-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:24:57.559776    4863 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:24:57.576545    4863 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:24:57.625897    4863 start.go:159] libmachine.API.Create for "kubenet-336000" (driver="qemu2")
	I0731 10:24:57.625944    4863 client.go:168] LocalClient.Create starting
	I0731 10:24:57.626059    4863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:24:57.626122    4863 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:57.626142    4863 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:57.626218    4863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:24:57.626262    4863 main.go:141] libmachine: Decoding PEM data...
	I0731 10:24:57.626282    4863 main.go:141] libmachine: Parsing certificate...
	I0731 10:24:57.626785    4863 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:24:57.788877    4863 main.go:141] libmachine: Creating SSH key...
	I0731 10:24:57.872016    4863 main.go:141] libmachine: Creating Disk image...
	I0731 10:24:57.872022    4863 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:24:57.872203    4863 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:57.881612    4863 main.go:141] libmachine: STDOUT: 
	I0731 10:24:57.881628    4863 main.go:141] libmachine: STDERR: 
	I0731 10:24:57.881687    4863 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2 +20000M
	I0731 10:24:57.889491    4863 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:24:57.889506    4863 main.go:141] libmachine: STDERR: 
	I0731 10:24:57.889517    4863 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:57.889522    4863 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:24:57.889533    4863 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:24:57.889558    4863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:9d:21:e2:38:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/kubenet-336000/disk.qcow2
	I0731 10:24:57.891225    4863 main.go:141] libmachine: STDOUT: 
	I0731 10:24:57.891246    4863 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:24:57.891266    4863 client.go:171] duration metric: took 265.322083ms to LocalClient.Create
	I0731 10:24:59.893476    4863 start.go:128] duration metric: took 2.333679208s to createHost
	I0731 10:24:59.893536    4863 start.go:83] releasing machines lock for "kubenet-336000", held for 2.334180041s
	W0731 10:24:59.893846    4863 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:24:59.907436    4863 out.go:177] 
	W0731 10:24:59.912490    4863 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:24:59.912517    4863 out.go:239] * 
	* 
	W0731 10:24:59.915111    4863 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:24:59.922395    4863 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
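The client side can be probed in isolation: socket_vmnet_client connects to the socket and hands the connected descriptor (fd 3 in the QEMU command lines above) to whatever command it wraps, so a trivial payload serves as a health check. A sketch under the same path assumptions as above, with echo standing in for qemu-system-aarch64:

	# Prints "ok" once the daemon is reachable; while it is down this fails with
	# the same 'Failed to connect to "/var/run/socket_vmnet"' error seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok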

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.914122708s)

-- stdout --
	* [custom-flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-336000" primary control-plane node in "custom-flannel-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:02.109074    4972 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:02.109203    4972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:02.109206    4972 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:02.109209    4972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:02.109345    4972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:02.110419    4972 out.go:298] Setting JSON to false
	I0731 10:25:02.126324    4972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3266,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:02.126400    4972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:02.132087    4972 out.go:177] * [custom-flannel-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:02.140061    4972 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:02.140088    4972 notify.go:220] Checking for updates...
	I0731 10:25:02.144413    4972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:02.146999    4972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:02.150025    4972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:02.153038    4972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:02.156000    4972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:02.159396    4972 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:02.159469    4972 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:02.159515    4972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:02.163943    4972 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:25:02.171074    4972 start.go:297] selected driver: qemu2
	I0731 10:25:02.171082    4972 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:25:02.171090    4972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:02.173305    4972 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:25:02.175989    4972 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:25:02.179004    4972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:02.179017    4972 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 10:25:02.179027    4972 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 10:25:02.179053    4972 start.go:340] cluster config:
	{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:02.182500    4972 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:02.189847    4972 out.go:177] * Starting "custom-flannel-336000" primary control-plane node in "custom-flannel-336000" cluster
	I0731 10:25:02.194005    4972 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:25:02.194021    4972 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:25:02.194037    4972 cache.go:56] Caching tarball of preloaded images
	I0731 10:25:02.194093    4972 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:25:02.194099    4972 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:25:02.194164    4972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/custom-flannel-336000/config.json ...
	I0731 10:25:02.194175    4972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/custom-flannel-336000/config.json: {Name:mkc78513b57dbf1862538aed6124ef39398780b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:25:02.194384    4972 start.go:360] acquireMachinesLock for custom-flannel-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:02.194424    4972 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "custom-flannel-336000"
	I0731 10:25:02.194435    4972 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:02.194464    4972 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:02.199933    4972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:02.217287    4972 start.go:159] libmachine.API.Create for "custom-flannel-336000" (driver="qemu2")
	I0731 10:25:02.217315    4972 client.go:168] LocalClient.Create starting
	I0731 10:25:02.217369    4972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:02.217401    4972 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:02.217409    4972 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:02.217444    4972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:02.217466    4972 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:02.217474    4972 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:02.217833    4972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:02.370612    4972 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:02.580975    4972 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:02.580983    4972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:02.581217    4972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:02.591151    4972 main.go:141] libmachine: STDOUT: 
	I0731 10:25:02.591176    4972 main.go:141] libmachine: STDERR: 
	I0731 10:25:02.591225    4972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2 +20000M
	I0731 10:25:02.599090    4972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:02.599106    4972 main.go:141] libmachine: STDERR: 
	I0731 10:25:02.599123    4972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:02.599129    4972 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:02.599141    4972 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:02.599166    4972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:18:88:73:a6:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:02.600753    4972 main.go:141] libmachine: STDOUT: 
	I0731 10:25:02.600769    4972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:02.600786    4972 client.go:171] duration metric: took 383.473667ms to LocalClient.Create
	I0731 10:25:04.602952    4972 start.go:128] duration metric: took 2.408502125s to createHost
	I0731 10:25:04.603106    4972 start.go:83] releasing machines lock for "custom-flannel-336000", held for 2.408708792s
	W0731 10:25:04.603168    4972 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:04.618404    4972 out.go:177] * Deleting "custom-flannel-336000" in qemu2 ...
	W0731 10:25:04.646034    4972 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:04.646056    4972 start.go:729] Will try again in 5 seconds ...
	I0731 10:25:09.648159    4972 start.go:360] acquireMachinesLock for custom-flannel-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:09.648657    4972 start.go:364] duration metric: took 366.708µs to acquireMachinesLock for "custom-flannel-336000"
	I0731 10:25:09.648848    4972 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:09.649045    4972 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:09.665858    4972 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:09.714003    4972 start.go:159] libmachine.API.Create for "custom-flannel-336000" (driver="qemu2")
	I0731 10:25:09.714059    4972 client.go:168] LocalClient.Create starting
	I0731 10:25:09.714170    4972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:09.714224    4972 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:09.714242    4972 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:09.714303    4972 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:09.714347    4972 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:09.714361    4972 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:09.714820    4972 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:09.888996    4972 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:09.932366    4972 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:09.932374    4972 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:09.932564    4972 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:09.941648    4972 main.go:141] libmachine: STDOUT: 
	I0731 10:25:09.941665    4972 main.go:141] libmachine: STDERR: 
	I0731 10:25:09.941711    4972 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2 +20000M
	I0731 10:25:09.949509    4972 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:09.949523    4972 main.go:141] libmachine: STDERR: 
	I0731 10:25:09.949533    4972 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:09.949537    4972 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:09.949550    4972 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:09.949609    4972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:86:2d:e5:6f:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/custom-flannel-336000/disk.qcow2
	I0731 10:25:09.951237    4972 main.go:141] libmachine: STDOUT: 
	I0731 10:25:09.951254    4972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:09.951266    4972 client.go:171] duration metric: took 237.206208ms to LocalClient.Create
	I0731 10:25:11.953407    4972 start.go:128] duration metric: took 2.304375583s to createHost
	I0731 10:25:11.953479    4972 start.go:83] releasing machines lock for "custom-flannel-336000", held for 2.304790334s
	W0731 10:25:11.953885    4972 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:11.962312    4972 out.go:177] 
	W0731 10:25:11.969197    4972 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:11.969251    4972 out.go:239] * 
	* 
	W0731 10:25:11.971777    4972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:25:11.981265    4972 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
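Every start failure in this group reduces to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon, so qemu-system-aarch64 is never launched and minikube exits with status 80 (GUEST_PROVISION). A minimal pre-flight check is sketched below in Go; the socket path is taken from the logs above, while the program itself is illustrative and not part of the minikube test harness.

// preflight.go: a minimal sketch that dials the unix socket used by
// socket_vmnet_client. If this fails with "connection refused", the
// daemon is not running and every qemu2 start in this report will fail.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path observed in the logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

A successful dial here is a necessary precondition for the qemu2 driver on this runner; the check may need to run as root if the socket's permissions restrict access.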

TestNetworkPlugins/group/calico/Start (9.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.8444085s)

-- stdout --
	* [calico-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-336000" primary control-plane node in "calico-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:14.348747    5091 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:14.348958    5091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:14.348961    5091 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:14.348964    5091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:14.349097    5091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:14.350150    5091 out.go:298] Setting JSON to false
	I0731 10:25:14.366347    5091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3278,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:14.366462    5091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:14.372926    5091 out.go:177] * [calico-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:14.380892    5091 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:14.380970    5091 notify.go:220] Checking for updates...
	I0731 10:25:14.386947    5091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:14.388452    5091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:14.391944    5091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:14.394959    5091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:14.397975    5091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:14.401368    5091 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:14.401442    5091 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:14.401497    5091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:14.405975    5091 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:25:14.412854    5091 start.go:297] selected driver: qemu2
	I0731 10:25:14.412861    5091 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:25:14.412867    5091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:14.415146    5091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:25:14.417949    5091 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:25:14.421054    5091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:14.421080    5091 cni.go:84] Creating CNI manager for "calico"
	I0731 10:25:14.421085    5091 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 10:25:14.421125    5091 start.go:340] cluster config:
	{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:14.424817    5091 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:14.431898    5091 out.go:177] * Starting "calico-336000" primary control-plane node in "calico-336000" cluster
	I0731 10:25:14.434920    5091 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:25:14.434935    5091 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:25:14.434949    5091 cache.go:56] Caching tarball of preloaded images
	I0731 10:25:14.435023    5091 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:25:14.435029    5091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:25:14.435083    5091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/calico-336000/config.json ...
	I0731 10:25:14.435093    5091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/calico-336000/config.json: {Name:mkdc169a610d9f31dbac1bce5b4bbb6ac8b34bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:25:14.435309    5091 start.go:360] acquireMachinesLock for calico-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:14.435345    5091 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "calico-336000"
	I0731 10:25:14.435356    5091 start.go:93] Provisioning new machine with config: &{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:14.435397    5091 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:14.442807    5091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:14.460319    5091 start.go:159] libmachine.API.Create for "calico-336000" (driver="qemu2")
	I0731 10:25:14.460351    5091 client.go:168] LocalClient.Create starting
	I0731 10:25:14.460408    5091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:14.460442    5091 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:14.460455    5091 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:14.460495    5091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:14.460519    5091 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:14.460529    5091 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:14.460927    5091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:14.613350    5091 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:14.743740    5091 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:14.743751    5091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:14.743975    5091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:14.753404    5091 main.go:141] libmachine: STDOUT: 
	I0731 10:25:14.753424    5091 main.go:141] libmachine: STDERR: 
	I0731 10:25:14.753479    5091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2 +20000M
	I0731 10:25:14.761478    5091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:14.761496    5091 main.go:141] libmachine: STDERR: 
	I0731 10:25:14.761521    5091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:14.761526    5091 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:14.761541    5091 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:14.761565    5091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b6:6f:08:61:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:14.763199    5091 main.go:141] libmachine: STDOUT: 
	I0731 10:25:14.763337    5091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:14.763353    5091 client.go:171] duration metric: took 303.003375ms to LocalClient.Create
	I0731 10:25:16.765490    5091 start.go:128] duration metric: took 2.330105791s to createHost
	I0731 10:25:16.765550    5091 start.go:83] releasing machines lock for "calico-336000", held for 2.330233083s
	W0731 10:25:16.765657    5091 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:16.776671    5091 out.go:177] * Deleting "calico-336000" in qemu2 ...
	W0731 10:25:16.809992    5091 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:16.810022    5091 start.go:729] Will try again in 5 seconds ...
	I0731 10:25:21.812107    5091 start.go:360] acquireMachinesLock for calico-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:21.812583    5091 start.go:364] duration metric: took 380.333µs to acquireMachinesLock for "calico-336000"
	I0731 10:25:21.812686    5091 start.go:93] Provisioning new machine with config: &{Name:calico-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:21.813018    5091 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:21.830704    5091 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:21.884194    5091 start.go:159] libmachine.API.Create for "calico-336000" (driver="qemu2")
	I0731 10:25:21.884244    5091 client.go:168] LocalClient.Create starting
	I0731 10:25:21.884361    5091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:21.884420    5091 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:21.884435    5091 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:21.884496    5091 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:21.884550    5091 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:21.884560    5091 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:21.885067    5091 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:22.047648    5091 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:22.104613    5091 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:22.104618    5091 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:22.104797    5091 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:22.114072    5091 main.go:141] libmachine: STDOUT: 
	I0731 10:25:22.114091    5091 main.go:141] libmachine: STDERR: 
	I0731 10:25:22.114138    5091 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2 +20000M
	I0731 10:25:22.121863    5091 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:22.121878    5091 main.go:141] libmachine: STDERR: 
	I0731 10:25:22.121888    5091 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:22.121892    5091 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:22.121904    5091 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:22.121941    5091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:3d:c9:c5:56:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/calico-336000/disk.qcow2
	I0731 10:25:22.123564    5091 main.go:141] libmachine: STDOUT: 
	I0731 10:25:22.123627    5091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:22.123642    5091 client.go:171] duration metric: took 239.397583ms to LocalClient.Create
	I0731 10:25:24.125783    5091 start.go:128] duration metric: took 2.312739417s to createHost
	I0731 10:25:24.125837    5091 start.go:83] releasing machines lock for "calico-336000", held for 2.313268792s
	W0731 10:25:24.126229    5091 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:24.138923    5091 out.go:177] 
	W0731 10:25:24.143056    5091 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:24.143083    5091 out.go:239] * 
	* 
	W0731 10:25:24.145839    5091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:25:24.152790    5091 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.85s)
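The calico log shows the same recovery path as the other drivers: createHost fails, the half-created profile is deleted, and minikube waits five seconds before a single retry (start.go:714 and start.go:729 in the stderr above). The sketch below is an illustrative reconstruction of that control flow; the function names are assumptions, not minikube's actual API.

package sketch

import "time"

// startHostWithRetry mirrors the flow visible in the stderr log: one
// cleanup and one delayed retry, after which the error propagates and
// the CLI exits with GUEST_PROVISION (exit status 80).
func startHostWithRetry(create func() error, cleanup func()) error {
	if err := create(); err == nil {
		return nil
	}
	cleanup()                   // "* Deleting "calico-336000" in qemu2 ..."
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	return create()             // a second failure is terminal
}

Because the daemon is down for the whole run, the retry buys nothing; each test spends two createHost attempts of roughly 2.3s each plus the 5s back-off, which matches the ~9.8-10s durations reported for every Start failure in this group.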

TestNetworkPlugins/group/false/Start (9.97s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-336000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.962671834s)

-- stdout --
	* [false-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-336000" primary control-plane node in "false-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:26.536414    5208 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:26.536535    5208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:26.536538    5208 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:26.536541    5208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:26.536673    5208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:26.537735    5208 out.go:298] Setting JSON to false
	I0731 10:25:26.553748    5208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3290,"bootTime":1722443436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:26.553823    5208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:26.559700    5208 out.go:177] * [false-336000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:26.567697    5208 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:26.567727    5208 notify.go:220] Checking for updates...
	I0731 10:25:26.573674    5208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:26.576655    5208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:26.578091    5208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:26.580656    5208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:26.583638    5208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:26.587086    5208 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:26.587165    5208 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:26.587215    5208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:26.591587    5208 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:25:26.598653    5208 start.go:297] selected driver: qemu2
	I0731 10:25:26.598661    5208 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:25:26.598669    5208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:26.600928    5208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:25:26.603659    5208 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:25:26.606748    5208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:26.606778    5208 cni.go:84] Creating CNI manager for "false"
	I0731 10:25:26.606812    5208 start.go:340] cluster config:
	{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:26.610477    5208 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:26.617644    5208 out.go:177] * Starting "false-336000" primary control-plane node in "false-336000" cluster
	I0731 10:25:26.621682    5208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:25:26.621699    5208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:25:26.621709    5208 cache.go:56] Caching tarball of preloaded images
	I0731 10:25:26.621782    5208 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:25:26.621789    5208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:25:26.621858    5208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/false-336000/config.json ...
	I0731 10:25:26.621874    5208 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/false-336000/config.json: {Name:mk47ad14c573c4cb6592aeba4595f415c7d69922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:25:26.622218    5208 start.go:360] acquireMachinesLock for false-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:26.622252    5208 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "false-336000"
	I0731 10:25:26.622262    5208 start.go:93] Provisioning new machine with config: &{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:26.622295    5208 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:26.630665    5208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:26.647651    5208 start.go:159] libmachine.API.Create for "false-336000" (driver="qemu2")
	I0731 10:25:26.647678    5208 client.go:168] LocalClient.Create starting
	I0731 10:25:26.647735    5208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:26.647765    5208 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:26.647773    5208 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:26.647810    5208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:26.647832    5208 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:26.647842    5208 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:26.648309    5208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:26.800494    5208 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:26.958954    5208 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:26.958964    5208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:26.959169    5208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:26.968801    5208 main.go:141] libmachine: STDOUT: 
	I0731 10:25:26.968819    5208 main.go:141] libmachine: STDERR: 
	I0731 10:25:26.968863    5208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2 +20000M
	I0731 10:25:26.976662    5208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:26.976689    5208 main.go:141] libmachine: STDERR: 
	I0731 10:25:26.976718    5208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:26.976723    5208 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:26.976735    5208 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:26.976765    5208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:9c:f3:90:14:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:26.978488    5208 main.go:141] libmachine: STDOUT: 
	I0731 10:25:26.978504    5208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:26.978524    5208 client.go:171] duration metric: took 330.847208ms to LocalClient.Create
	I0731 10:25:28.980672    5208 start.go:128] duration metric: took 2.358397083s to createHost
	I0731 10:25:28.980735    5208 start.go:83] releasing machines lock for "false-336000", held for 2.358512417s
	W0731 10:25:28.980795    5208 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:28.996222    5208 out.go:177] * Deleting "false-336000" in qemu2 ...
	W0731 10:25:29.025888    5208 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:29.025914    5208 start.go:729] Will try again in 5 seconds ...
	I0731 10:25:34.028024    5208 start.go:360] acquireMachinesLock for false-336000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:34.028570    5208 start.go:364] duration metric: took 430.875µs to acquireMachinesLock for "false-336000"
	I0731 10:25:34.028722    5208 start.go:93] Provisioning new machine with config: &{Name:false-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:34.028991    5208 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:34.040767    5208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 10:25:34.089459    5208 start.go:159] libmachine.API.Create for "false-336000" (driver="qemu2")
	I0731 10:25:34.089514    5208 client.go:168] LocalClient.Create starting
	I0731 10:25:34.089637    5208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:34.089705    5208 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:34.089723    5208 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:34.089791    5208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:34.089836    5208 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:34.089847    5208 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:34.090426    5208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:34.251272    5208 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:34.395055    5208 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:34.395061    5208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:34.395259    5208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:34.404789    5208 main.go:141] libmachine: STDOUT: 
	I0731 10:25:34.404804    5208 main.go:141] libmachine: STDERR: 
	I0731 10:25:34.404865    5208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2 +20000M
	I0731 10:25:34.412747    5208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:34.412761    5208 main.go:141] libmachine: STDERR: 
	I0731 10:25:34.412773    5208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:34.412776    5208 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:34.412788    5208 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:34.412817    5208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d8:de:00:38:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/false-336000/disk.qcow2
	I0731 10:25:34.414500    5208 main.go:141] libmachine: STDOUT: 
	I0731 10:25:34.414513    5208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:34.414524    5208 client.go:171] duration metric: took 325.009208ms to LocalClient.Create
	I0731 10:25:36.416662    5208 start.go:128] duration metric: took 2.387665417s to createHost
	I0731 10:25:36.416752    5208 start.go:83] releasing machines lock for "false-336000", held for 2.388175917s
	W0731 10:25:36.417108    5208 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:36.434920    5208 out.go:177] 
	W0731 10:25:36.437835    5208 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:36.437895    5208 out.go:239] * 
	* 
	W0731 10:25:36.440214    5208 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:25:36.455210    5208 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.97s)
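Note that disk provisioning succeeds on every attempt: qemu-img convert (raw to qcow2) and qemu-img resize +20000M both return with empty STDERR before the socket_vmnet connection is refused, so the failure is isolated to guest networking. Below is a minimal Go sketch of those two steps, using placeholder paths rather than the report's actual machine directories.

package sketch

import (
	"fmt"
	"os/exec"
)

// createDisk reproduces the two qemu-img invocations from the logs: a
// raw-to-qcow2 convert followed by a +20000M grow. Any error in either
// step would appear in the STDERR lines above, which are empty.
func createDisk(rawPath, qcowPath string) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcowPath).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcowPath, "+20000M").CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}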

TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.015664791s)

-- stdout --
	* [old-k8s-version-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:38.610609    5323 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:38.610742    5323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:38.610745    5323 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:38.610747    5323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:38.610858    5323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:38.611900    5323 out.go:298] Setting JSON to false
	I0731 10:25:38.627895    5323 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3302,"bootTime":1722443436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:38.627967    5323 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:38.634791    5323 out.go:177] * [old-k8s-version-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:38.641689    5323 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:38.641737    5323 notify.go:220] Checking for updates...
	I0731 10:25:38.647619    5323 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:38.650662    5323 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:38.657692    5323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:38.660610    5323 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:38.663652    5323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:38.666951    5323 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:38.667025    5323 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:38.667092    5323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:38.670657    5323 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:25:38.677665    5323 start.go:297] selected driver: qemu2
	I0731 10:25:38.677671    5323 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:25:38.677676    5323 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:38.680173    5323 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:25:38.681678    5323 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:25:38.684793    5323 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:38.684810    5323 cni.go:84] Creating CNI manager for ""
	I0731 10:25:38.684818    5323 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 10:25:38.684842    5323 start.go:340] cluster config:
	{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:38.688678    5323 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:38.696588    5323 out.go:177] * Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	I0731 10:25:38.700688    5323 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 10:25:38.700705    5323 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 10:25:38.700722    5323 cache.go:56] Caching tarball of preloaded images
	I0731 10:25:38.700797    5323 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:25:38.700804    5323 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 10:25:38.700859    5323 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/old-k8s-version-537000/config.json ...
	I0731 10:25:38.700879    5323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/old-k8s-version-537000/config.json: {Name:mke93a81c1941cf6c34d67ad603b648f0350e2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:25:38.701231    5323 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:38.701269    5323 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "old-k8s-version-537000"
	I0731 10:25:38.701280    5323 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:38.701326    5323 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:38.709625    5323 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:25:38.727612    5323 start.go:159] libmachine.API.Create for "old-k8s-version-537000" (driver="qemu2")
	I0731 10:25:38.727640    5323 client.go:168] LocalClient.Create starting
	I0731 10:25:38.727713    5323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:38.727748    5323 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:38.727755    5323 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:38.727800    5323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:38.727821    5323 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:38.727827    5323 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:38.728157    5323 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:38.879904    5323 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:38.991711    5323 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:38.991717    5323 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:38.991907    5323 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:39.001190    5323 main.go:141] libmachine: STDOUT: 
	I0731 10:25:39.001207    5323 main.go:141] libmachine: STDERR: 
	I0731 10:25:39.001246    5323 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2 +20000M
	I0731 10:25:39.008998    5323 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:39.009020    5323 main.go:141] libmachine: STDERR: 
	I0731 10:25:39.009040    5323 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:39.009045    5323 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:39.009054    5323 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:39.009083    5323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:63:81:b6:e1:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:39.010749    5323 main.go:141] libmachine: STDOUT: 
	I0731 10:25:39.010762    5323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:39.010779    5323 client.go:171] duration metric: took 283.136959ms to LocalClient.Create
	I0731 10:25:41.012918    5323 start.go:128] duration metric: took 2.311608125s to createHost
	I0731 10:25:41.012995    5323 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 2.311753209s
	W0731 10:25:41.013077    5323 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:41.024121    5323 out.go:177] * Deleting "old-k8s-version-537000" in qemu2 ...
	W0731 10:25:41.058531    5323 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:41.058558    5323 start.go:729] Will try again in 5 seconds ...
	I0731 10:25:46.060719    5323 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:46.061194    5323 start.go:364] duration metric: took 386.333µs to acquireMachinesLock for "old-k8s-version-537000"
	I0731 10:25:46.061325    5323 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:46.061663    5323 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:46.067193    5323 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:25:46.116398    5323 start.go:159] libmachine.API.Create for "old-k8s-version-537000" (driver="qemu2")
	I0731 10:25:46.116447    5323 client.go:168] LocalClient.Create starting
	I0731 10:25:46.116558    5323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:46.116623    5323 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:46.116638    5323 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:46.116700    5323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:46.116751    5323 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:46.116764    5323 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:46.117873    5323 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:46.288097    5323 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:46.534389    5323 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:46.534401    5323 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:46.534605    5323 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:46.543975    5323 main.go:141] libmachine: STDOUT: 
	I0731 10:25:46.543995    5323 main.go:141] libmachine: STDERR: 
	I0731 10:25:46.544037    5323 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2 +20000M
	I0731 10:25:46.551943    5323 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:46.551973    5323 main.go:141] libmachine: STDERR: 
	I0731 10:25:46.551986    5323 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:46.551991    5323 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:46.551999    5323 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:46.552034    5323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:dc:a6:1e:b5:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:46.553629    5323 main.go:141] libmachine: STDOUT: 
	I0731 10:25:46.553642    5323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:46.553655    5323 client.go:171] duration metric: took 437.211542ms to LocalClient.Create
	I0731 10:25:48.555798    5323 start.go:128] duration metric: took 2.494142542s to createHost
	I0731 10:25:48.555875    5323 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 2.49468825s
	W0731 10:25:48.556349    5323 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:48.566025    5323 out.go:177] 
	W0731 10:25:48.572040    5323 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:48.572093    5323 out.go:239] * 
	* 
	W0731 10:25:48.574821    5323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:25:48.584901    5323 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (66.061458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)
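
The failure above is environmental rather than a regression in the test itself: socket_vmnet_client exits with "Connection refused" because nothing is accepting connections on /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal standalone probe of that socket, using only the path shown in the log (this helper is illustrative and not part of the test suite):

    // probe_vmnet.go: dial the unix socket that socket_vmnet_client needs
    // before it can hand QEMU a network fd. A "connection refused" here is
    // the same condition as in the failure above: no socket_vmnet daemon
    // is listening on the socket path.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the error above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }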

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml: exit status 1 (29.506ms)

** stderr ** 
	error: context "old-k8s-version-537000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-537000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.897917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.3235ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
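
Every remaining subtest in this group inherits the same condition: because FirstStart never built the VM, no kubeconfig context named old-k8s-version-537000 was ever written, and kubectl fails before reaching any cluster. The lookup kubectl performs can be reproduced with client-go; a sketch, assuming the default kubeconfig path (the program itself is ours, not minikube's):

    // context_check.go: load the kubeconfig and look up a context by name,
    // mirroring the `context "..." does not exist` error above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        const name = "old-k8s-version-537000"
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("error: context %q does not exist\n", name)
            os.Exit(1)
        }
        fmt.Printf("context %q found\n", name)
    }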

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-537000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system: exit status 1 (26.669292ms)

** stderr ** 
	error: context "old-k8s-version-537000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-537000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (30.166125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
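
The assertion at start_stop_delete_test.go:221 expects the deployment image to carry the overridden registry prepended to the overridden image, i.e. "fake.domain/registry.k8s.io/echoserver:1.4" composed from the --registries and --images flags. A sketch of that composition (the helper name is ours, not a minikube API):

    package main

    import "fmt"

    // expectedImage joins a registry override and an image override into the
    // string the test greps the deployment description for (illustrative).
    func expectedImage(registry, image string) string {
        if registry == "" {
            return image
        }
        return registry + "/" + image
    }

    func main() {
        fmt.Println(expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
        // fake.domain/registry.k8s.io/echoserver:1.4
    }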

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184070875s)

-- stdout --
	* [old-k8s-version-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:52.404811    5376 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:52.404941    5376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:52.404944    5376 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:52.404947    5376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:52.405076    5376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:52.406088    5376 out.go:298] Setting JSON to false
	I0731 10:25:52.422074    5376 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3316,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:52.422149    5376 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:52.426547    5376 out.go:177] * [old-k8s-version-537000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:52.433454    5376 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:52.433538    5376 notify.go:220] Checking for updates...
	I0731 10:25:52.440533    5376 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:52.442040    5376 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:52.445551    5376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:52.448579    5376 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:52.451576    5376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:52.454802    5376 config.go:182] Loaded profile config "old-k8s-version-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 10:25:52.458511    5376 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 10:25:52.461586    5376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:52.465540    5376 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:25:52.472567    5376 start.go:297] selected driver: qemu2
	I0731 10:25:52.472574    5376 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:52.472642    5376 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:52.475062    5376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:52.475087    5376 cni.go:84] Creating CNI manager for ""
	I0731 10:25:52.475094    5376 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 10:25:52.475120    5376 start.go:340] cluster config:
	{Name:old-k8s-version-537000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:52.478760    5376 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:52.486551    5376 out.go:177] * Starting "old-k8s-version-537000" primary control-plane node in "old-k8s-version-537000" cluster
	I0731 10:25:52.490544    5376 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 10:25:52.490560    5376 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 10:25:52.490571    5376 cache.go:56] Caching tarball of preloaded images
	I0731 10:25:52.490636    5376 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:25:52.490642    5376 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 10:25:52.490700    5376 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/old-k8s-version-537000/config.json ...
	I0731 10:25:52.491179    5376 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:52.491209    5376 start.go:364] duration metric: took 22µs to acquireMachinesLock for "old-k8s-version-537000"
	I0731 10:25:52.491219    5376 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:25:52.491225    5376 fix.go:54] fixHost starting: 
	I0731 10:25:52.491331    5376 fix.go:112] recreateIfNeeded on old-k8s-version-537000: state=Stopped err=<nil>
	W0731 10:25:52.491339    5376 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:25:52.494630    5376 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	I0731 10:25:52.502521    5376 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:52.502571    5376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:dc:a6:1e:b5:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:52.504568    5376 main.go:141] libmachine: STDOUT: 
	I0731 10:25:52.504587    5376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:52.504616    5376 fix.go:56] duration metric: took 13.392166ms for fixHost
	I0731 10:25:52.504620    5376 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 13.404959ms
	W0731 10:25:52.504627    5376 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:52.504666    5376 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:52.504671    5376 start.go:729] Will try again in 5 seconds ...
	I0731 10:25:57.506781    5376 start.go:360] acquireMachinesLock for old-k8s-version-537000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:57.507178    5376 start.go:364] duration metric: took 286.75µs to acquireMachinesLock for "old-k8s-version-537000"
	I0731 10:25:57.507266    5376 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:25:57.507286    5376 fix.go:54] fixHost starting: 
	I0731 10:25:57.508018    5376 fix.go:112] recreateIfNeeded on old-k8s-version-537000: state=Stopped err=<nil>
	W0731 10:25:57.508043    5376 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:25:57.513484    5376 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-537000" ...
	I0731 10:25:57.517458    5376 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:57.517706    5376 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:dc:a6:1e:b5:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/old-k8s-version-537000/disk.qcow2
	I0731 10:25:57.526676    5376 main.go:141] libmachine: STDOUT: 
	I0731 10:25:57.526739    5376 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:57.526820    5376 fix.go:56] duration metric: took 19.539125ms for fixHost
	I0731 10:25:57.526833    5376 start.go:83] releasing machines lock for "old-k8s-version-537000", held for 19.631542ms
	W0731 10:25:57.526984    5376 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-537000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:25:57.534444    5376 out.go:177] 
	W0731 10:25:57.538539    5376 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:25:57.538571    5376 out.go:239] * 
	* 
	W0731 10:25:57.541050    5376 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:25:57.548444    5376 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-537000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (67.878959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
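
The stderr above shows minikube's start path retrying once after a fixed five-second delay ("Will try again in 5 seconds ...") before surfacing GUEST_PROVISION. The shape of that retry, reduced to a sketch (function names are ours; the real logic lives in minikube's start.go and is more involved):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start that fails above.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // fixed delay, as in the log
            if err = startHost(); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            }
        }
    }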

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-537000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (31.861125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-537000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.903958ms)

** stderr ** 
	error: context "old-k8s-version-537000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-537000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.795792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-537000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.133833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
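
The "(-want +got)" block above is a go-cmp diff: every expected v1.20.0 image sits on the -want side and the +got side is empty, because `image list` ran against a VM that never booted. The same output can be reproduced with cmp.Diff (image list copied from the failure; got left empty by construction):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/kube-controller-manager:v1.20.0",
            "k8s.gcr.io/kube-proxy:v1.20.0",
            "k8s.gcr.io/kube-scheduler:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        var got []string // nothing listed: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }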

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1: exit status 83 (41.322833ms)

-- stdout --
	* The control-plane node old-k8s-version-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-537000"

-- /stdout --
** stderr ** 
	I0731 10:25:57.818026    5395 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:57.818415    5395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:57.818419    5395 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:57.818421    5395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:57.818584    5395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:57.818777    5395 out.go:298] Setting JSON to false
	I0731 10:25:57.818783    5395 mustload.go:65] Loading cluster: old-k8s-version-537000
	I0731 10:25:57.818963    5395 config.go:182] Loaded profile config "old-k8s-version-537000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 10:25:57.823218    5395 out.go:177] * The control-plane node old-k8s-version-537000 host is not running: state=Stopped
	I0731 10:25:57.826335    5395 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-537000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.082709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (29.074334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
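
The exit status 83 above is minikube declining to pause because the host is Stopped, as the stdout advice shows. In a wrapper script the pause could be gated on host state first; a minimal sketch using only commands that appear in this log:

    host_state="$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-537000 || true)"
    if [ "$host_state" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p old-k8s-version-537000 --alsologtostderr -v=1
    fi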

TestStartStop/group/no-preload/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.084079625s)

-- stdout --
	* [no-preload-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-364000" primary control-plane node in "no-preload-364000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-364000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:25:58.131103    5412 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:25:58.131219    5412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:58.131222    5412 out.go:304] Setting ErrFile to fd 2...
	I0731 10:25:58.131224    5412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:25:58.131369    5412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:25:58.132407    5412 out.go:298] Setting JSON to false
	I0731 10:25:58.148298    5412 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3322,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:25:58.148365    5412 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:25:58.153391    5412 out.go:177] * [no-preload-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:25:58.160441    5412 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:25:58.160492    5412 notify.go:220] Checking for updates...
	I0731 10:25:58.166283    5412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:25:58.169392    5412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:25:58.172343    5412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:25:58.173778    5412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:25:58.177300    5412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:25:58.180726    5412 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:58.180786    5412 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:25:58.180839    5412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:25:58.185127    5412 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:25:58.192303    5412 start.go:297] selected driver: qemu2
	I0731 10:25:58.192309    5412 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:25:58.192314    5412 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:25:58.194548    5412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:25:58.197348    5412 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:25:58.200420    5412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:25:58.200466    5412 cni.go:84] Creating CNI manager for ""
	I0731 10:25:58.200474    5412 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:25:58.200478    5412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:25:58.200505    5412 start.go:340] cluster config:
	{Name:no-preload-364000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:25:58.204192    5412 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.210348    5412 out.go:177] * Starting "no-preload-364000" primary control-plane node in "no-preload-364000" cluster
	I0731 10:25:58.214324    5412 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 10:25:58.214408    5412 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/no-preload-364000/config.json ...
	I0731 10:25:58.214427    5412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/no-preload-364000/config.json: {Name:mkedb3df666c2f1b53d0158415bf4e252525a8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:25:58.214426    5412 cache.go:107] acquiring lock: {Name:mk48f8ef768db3d0a1caa01f5356c0d6aeb7c7f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214445    5412 cache.go:107] acquiring lock: {Name:mk22d1a422babff7a5b1800f76c43064fc5b119b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214445    5412 cache.go:107] acquiring lock: {Name:mk179123e6e9d1388c59801eb01a835c12439250 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214493    5412 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 10:25:58.214503    5412 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79µs
	I0731 10:25:58.214509    5412 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 10:25:58.214518    5412 cache.go:107] acquiring lock: {Name:mkc620d26e35b36e3e50ab3c9920b4709ba407aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214610    5412 cache.go:107] acquiring lock: {Name:mk94713b51f1b488a1157271c0a6b7cd7460e27f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214655    5412 cache.go:107] acquiring lock: {Name:mk6acab31ff0f8e06469bb0d87fd128f372e46bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214680    5412 start.go:360] acquireMachinesLock for no-preload-364000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:25:58.214672    5412 cache.go:107] acquiring lock: {Name:mkcd72e8c37914da7458b1a2e6ac0ac959cabbef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214749    5412 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 10:25:58.214739    5412 cache.go:107] acquiring lock: {Name:mkf851061d3ada9620f787da93a719c0e494aea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:25:58.214750    5412 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 10:25:58.214765    5412 start.go:364] duration metric: took 68.375µs to acquireMachinesLock for "no-preload-364000"
	I0731 10:25:58.214762    5412 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 10:25:58.214818    5412 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 10:25:58.214834    5412 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 10:25:58.214877    5412 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 10:25:58.214841    5412 start.go:93] Provisioning new machine with config: &{Name:no-preload-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:25:58.214946    5412 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:25:58.214965    5412 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 10:25:58.219332    5412 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:25:58.223871    5412 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 10:25:58.226634    5412 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 10:25:58.226713    5412 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 10:25:58.226764    5412 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 10:25:58.226810    5412 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 10:25:58.226891    5412 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 10:25:58.228504    5412 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 10:25:58.236589    5412 start.go:159] libmachine.API.Create for "no-preload-364000" (driver="qemu2")
	I0731 10:25:58.236606    5412 client.go:168] LocalClient.Create starting
	I0731 10:25:58.236668    5412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:25:58.236695    5412 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:58.236704    5412 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:58.236743    5412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:25:58.236765    5412 main.go:141] libmachine: Decoding PEM data...
	I0731 10:25:58.236777    5412 main.go:141] libmachine: Parsing certificate...
	I0731 10:25:58.237068    5412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:25:58.393624    5412 main.go:141] libmachine: Creating SSH key...
	I0731 10:25:58.608795    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0731 10:25:58.623084    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 10:25:58.634767    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 10:25:58.641444    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0731 10:25:58.673416    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 10:25:58.679516    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 10:25:58.716534    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 10:25:58.716547    5412 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 502.003542ms
	I0731 10:25:58.716552    5412 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 10:25:58.723002    5412 cache.go:162] opening:  /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 10:25:58.764028    5412 main.go:141] libmachine: Creating Disk image...
	I0731 10:25:58.764037    5412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:25:58.764185    5412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:25:58.773398    5412 main.go:141] libmachine: STDOUT: 
	I0731 10:25:58.773416    5412 main.go:141] libmachine: STDERR: 
	I0731 10:25:58.773455    5412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2 +20000M
	I0731 10:25:58.781564    5412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:25:58.781578    5412 main.go:141] libmachine: STDERR: 
	I0731 10:25:58.781592    5412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:25:58.781598    5412 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:25:58.781612    5412 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:25:58.781667    5412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:30:76:ca:0e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:25:58.783498    5412 main.go:141] libmachine: STDOUT: 
	I0731 10:25:58.783512    5412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:25:58.783533    5412 client.go:171] duration metric: took 546.93075ms to LocalClient.Create
	I0731 10:26:00.783751    5412 start.go:128] duration metric: took 2.568805834s to createHost
	I0731 10:26:00.783852    5412 start.go:83] releasing machines lock for "no-preload-364000", held for 2.569087833s
	W0731 10:26:00.783912    5412 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:00.802045    5412 out.go:177] * Deleting "no-preload-364000" in qemu2 ...
	W0731 10:26:00.833202    5412 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:00.833246    5412 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:01.392103    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 10:26:01.392163    5412 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.177600375s
	I0731 10:26:01.392203    5412 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 10:26:01.834981    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 10:26:01.835023    5412 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.620442958s
	I0731 10:26:01.835046    5412 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 10:26:02.606891    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 10:26:02.606953    5412 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.392341792s
	I0731 10:26:02.606978    5412 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 10:26:02.708595    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 10:26:02.708659    5412 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.494301042s
	I0731 10:26:02.708685    5412 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 10:26:02.815101    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 10:26:02.815159    5412 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.600785291s
	I0731 10:26:02.815195    5412 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 10:26:05.833357    5412 start.go:360] acquireMachinesLock for no-preload-364000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:05.833867    5412 start.go:364] duration metric: took 423.042µs to acquireMachinesLock for "no-preload-364000"
	I0731 10:26:05.834019    5412 start.go:93] Provisioning new machine with config: &{Name:no-preload-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:05.834257    5412 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:05.844752    5412 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:05.896806    5412 start.go:159] libmachine.API.Create for "no-preload-364000" (driver="qemu2")
	I0731 10:26:05.896875    5412 client.go:168] LocalClient.Create starting
	I0731 10:26:05.897061    5412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:05.897145    5412 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:05.897160    5412 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:05.897225    5412 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:05.897273    5412 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:05.897287    5412 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:05.897809    5412 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:06.058687    5412 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:06.066147    5412 cache.go:157] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 10:26:06.066157    5412 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.8517655s
	I0731 10:26:06.066163    5412 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 10:26:06.066173    5412 cache.go:87] Successfully saved all images to host disk.
	I0731 10:26:06.119683    5412 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:06.119688    5412 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:06.119854    5412 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:26:06.129065    5412 main.go:141] libmachine: STDOUT: 
	I0731 10:26:06.129081    5412 main.go:141] libmachine: STDERR: 
	I0731 10:26:06.129130    5412 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2 +20000M
	I0731 10:26:06.137042    5412 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:06.137054    5412 main.go:141] libmachine: STDERR: 
	I0731 10:26:06.137064    5412 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:26:06.137070    5412 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:06.137081    5412 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:06.137127    5412 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:3e:3e:90:b3:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:26:06.138907    5412 main.go:141] libmachine: STDOUT: 
	I0731 10:26:06.138927    5412 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:06.138938    5412 client.go:171] duration metric: took 242.061ms to LocalClient.Create
	I0731 10:26:08.140381    5412 start.go:128] duration metric: took 2.306132958s to createHost
	I0731 10:26:08.140426    5412 start.go:83] releasing machines lock for "no-preload-364000", held for 2.306570667s
	W0731 10:26:08.140707    5412 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-364000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:08.152230    5412 out.go:177] 
	W0731 10:26:08.158332    5412 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:08.158356    5412 out.go:239] * 
	* 
	W0731 10:26:08.160992    5412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:08.172333    5412 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (56.13675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.14s)
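
Both create attempts above fail at the same point: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's socket. A hedged diagnostic sketch for the CI host follows; the paths come from the command lines in this log, while the daemon launch flags are assumptions based on socket_vmnet's documented usage, not something this run executed:

    # Does the socket exist, and is a daemon process serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If nothing is listening, manually starting the daemon may unblock the
    # driver (gateway address is a typical default, not taken from this log):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet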

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-364000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-364000 create -f testdata/busybox.yaml: exit status 1 (34.421042ms)

** stderr ** 
	error: context "no-preload-364000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-364000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.318792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.91525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
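
This failure is purely downstream of FirstStart: the cluster never came up, so no "no-preload-364000" kubeconfig context was created for kubectl to use. A quick standalone check (standard kubectl, nothing specific to this suite):

    kubectl config get-contexts
    # "no-preload-364000" will be absent until a start succeeds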

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-364000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-364000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-364000 describe deploy/metrics-server -n kube-system: exit status 1 (27.509958ms)

** stderr ** 
	error: context "no-preload-364000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-364000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.230459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
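
Had the cluster been reachable, the assertion at start_stop_delete_test.go:221 boils down to inspecting the metrics-server deployment's image field; a hedged hand-run equivalent:

    kubectl --context no-preload-364000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4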

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.182710375s)

-- stdout --
	* [no-preload-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-364000" primary control-plane node in "no-preload-364000" cluster
	* Restarting existing qemu2 VM for "no-preload-364000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-364000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:26:12.063710    5492 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:12.063818    5492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:12.063822    5492 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:12.063825    5492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:12.063962    5492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:12.064958    5492 out.go:298] Setting JSON to false
	I0731 10:26:12.081358    5492 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3336,"bootTime":1722443436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:12.081423    5492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:12.086829    5492 out.go:177] * [no-preload-364000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:12.094817    5492 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:12.094857    5492 notify.go:220] Checking for updates...
	I0731 10:26:12.100230    5492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:12.103790    5492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:12.106804    5492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:12.109849    5492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:12.112796    5492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:12.116057    5492 config.go:182] Loaded profile config "no-preload-364000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 10:26:12.116327    5492 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:12.120860    5492 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:26:12.127759    5492 start.go:297] selected driver: qemu2
	I0731 10:26:12.127765    5492 start.go:901] validating driver "qemu2" against &{Name:no-preload-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:12.127824    5492 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:12.130242    5492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:26:12.130286    5492 cni.go:84] Creating CNI manager for ""
	I0731 10:26:12.130298    5492 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:12.130331    5492 start.go:340] cluster config:
	{Name:no-preload-364000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-364000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:12.134041    5492 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.137835    5492 out.go:177] * Starting "no-preload-364000" primary control-plane node in "no-preload-364000" cluster
	I0731 10:26:12.145893    5492 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 10:26:12.145968    5492 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/no-preload-364000/config.json ...
	I0731 10:26:12.145984    5492 cache.go:107] acquiring lock: {Name:mk179123e6e9d1388c59801eb01a835c12439250 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.145987    5492 cache.go:107] acquiring lock: {Name:mk48f8ef768db3d0a1caa01f5356c0d6aeb7c7f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146015    5492 cache.go:107] acquiring lock: {Name:mk22d1a422babff7a5b1800f76c43064fc5b119b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146043    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 10:26:12.146049    5492 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.708µs
	I0731 10:26:12.146050    5492 cache.go:107] acquiring lock: {Name:mk94713b51f1b488a1157271c0a6b7cd7460e27f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146056    5492 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 10:26:12.146042    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 10:26:12.146064    5492 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 82.709µs
	I0731 10:26:12.146067    5492 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 10:26:12.146088    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 10:26:12.146083    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 10:26:12.146099    5492 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 104.625µs
	I0731 10:26:12.146100    5492 cache.go:107] acquiring lock: {Name:mkf851061d3ada9620f787da93a719c0e494aea0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146106    5492 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 10:26:12.146075    5492 cache.go:107] acquiring lock: {Name:mkc620d26e35b36e3e50ab3c9920b4709ba407aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146094    5492 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 44.125µs
	I0731 10:26:12.146167    5492 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 10:26:12.146135    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 10:26:12.146181    5492 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 82.166µs
	I0731 10:26:12.146185    5492 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 10:26:12.146132    5492 cache.go:107] acquiring lock: {Name:mkcd72e8c37914da7458b1a2e6ac0ac959cabbef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146192    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 10:26:12.146199    5492 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 124.708µs
	I0731 10:26:12.146204    5492 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 10:26:12.146208    5492 cache.go:107] acquiring lock: {Name:mk6acab31ff0f8e06469bb0d87fd128f372e46bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:12.146231    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 10:26:12.146236    5492 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 134.083µs
	I0731 10:26:12.146239    5492 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 10:26:12.146259    5492 cache.go:115] /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 10:26:12.146262    5492 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 122.041µs
	I0731 10:26:12.146268    5492 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 10:26:12.146272    5492 cache.go:87] Successfully saved all images to host disk.
	I0731 10:26:12.146388    5492 start.go:360] acquireMachinesLock for no-preload-364000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:12.146418    5492 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "no-preload-364000"
	I0731 10:26:12.146427    5492 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:12.146434    5492 fix.go:54] fixHost starting: 
	I0731 10:26:12.146558    5492 fix.go:112] recreateIfNeeded on no-preload-364000: state=Stopped err=<nil>
	W0731 10:26:12.146567    5492 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:12.154770    5492 out.go:177] * Restarting existing qemu2 VM for "no-preload-364000" ...
	I0731 10:26:12.158779    5492 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:12.158816    5492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:3e:3e:90:b3:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:26:12.160946    5492 main.go:141] libmachine: STDOUT: 
	I0731 10:26:12.160968    5492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:12.160995    5492 fix.go:56] duration metric: took 14.562834ms for fixHost
	I0731 10:26:12.161030    5492 start.go:83] releasing machines lock for "no-preload-364000", held for 14.607792ms
	W0731 10:26:12.161038    5492 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:12.161073    5492 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:12.161079    5492 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:17.161613    5492 start.go:360] acquireMachinesLock for no-preload-364000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:17.161988    5492 start.go:364] duration metric: took 287.916µs to acquireMachinesLock for "no-preload-364000"
	I0731 10:26:17.162107    5492 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:17.162129    5492 fix.go:54] fixHost starting: 
	I0731 10:26:17.162837    5492 fix.go:112] recreateIfNeeded on no-preload-364000: state=Stopped err=<nil>
	W0731 10:26:17.162866    5492 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:17.167142    5492 out.go:177] * Restarting existing qemu2 VM for "no-preload-364000" ...
	I0731 10:26:17.175285    5492 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:17.175493    5492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:3e:3e:90:b3:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/no-preload-364000/disk.qcow2
	I0731 10:26:17.184458    5492 main.go:141] libmachine: STDOUT: 
	I0731 10:26:17.184536    5492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:17.184606    5492 fix.go:56] duration metric: took 22.477125ms for fixHost
	I0731 10:26:17.184773    5492 start.go:83] releasing machines lock for "no-preload-364000", held for 22.764333ms
	W0731 10:26:17.184995    5492 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-364000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-364000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:17.192202    5492 out.go:177] 
	W0731 10:26:17.195320    5492 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:17.195365    5492 out.go:239] * 
	* 
	W0731 10:26:17.198121    5492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:17.209220    5492 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-364000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (67.227666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
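Every qemu2 start in this group fails the same way before the VM even boots: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused" in the stderr above). A minimal standalone probe, sketched here in Go under the assumption that the path from the log is the one in use (this is not minikube's own code), reproduces the dial the driver attempts:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client is handed in the log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver failure above.
		fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the same way, the socket_vmnet daemon is down (or listening at a different path) on the build host, which would account for every qemu2 start failure in this run.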

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-364000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (32.421791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-364000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-364000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0731 10:26:17.343904    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-364000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.787791ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-364000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-364000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
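Both post-stop tests above fail for the same follow-on reason: the failed SecondStart never (re)created the "no-preload-364000" kubeconfig context, so every kubectl call aborts with `context "no-preload-364000" does not exist`. A quick way to confirm, sketched in Go around the same kubectl binary the tests drive (the helper name is ours, not the test suite's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether a named kubeconfig context is defined,
// using `kubectl config get-contexts -o name` to list them.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("no-preload-364000")
	fmt.Println(ok, err) // expect: false <nil> on this host
}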

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-364000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.682167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
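The "-want +got" diff above lists every expected image as missing because `image list --format=json` against a stopped profile returns an empty set, so the whole want list survives the comparison. A small sketch of that set difference (our own illustration; the test itself uses a diff helper):

package main

import "fmt"

// missing returns the elements of want absent from got, preserving
// order; with got empty it returns all of want, exactly as in the
// diff above.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var out []string
	for _, img := range want {
		if !have[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.14-0",
		"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
		"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
		"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
		"registry.k8s.io/pause:3.10",
	}
	fmt.Println(missing(want, nil)) // all eight images are reported missing
}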

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-364000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-364000 --alsologtostderr -v=1: exit status 83 (40.945625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-364000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-364000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:26:17.474826    5513 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:17.474990    5513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:17.474993    5513 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:17.474995    5513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:17.475165    5513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:17.475384    5513 out.go:298] Setting JSON to false
	I0731 10:26:17.475389    5513 mustload.go:65] Loading cluster: no-preload-364000
	I0731 10:26:17.475600    5513 config.go:182] Loaded profile config "no-preload-364000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 10:26:17.480231    5513 out.go:177] * The control-plane node no-preload-364000 host is not running: state=Stopped
	I0731 10:26:17.483238    5513 out.go:177]   To start a cluster, run: "minikube start -p no-preload-364000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-364000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (28.762917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (29.339458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-364000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
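Here pause exits with status 83 (host in the wrong state), and each post-mortem then reruns the same status command and treats exit status 7 with "Stopped" on stdout as expected. The exit-code handling can be sketched like this (hostState is our name; the command line is copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the status command from the post-mortem above and
// recovers both the printed state and the process exit code.
func hostState(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
		err = nil // a non-zero exit is expected for a stopped host
	}
	return strings.TrimSpace(string(out)), code, err
}

func main() {
	state, code, err := hostState("no-preload-364000")
	fmt.Printf("state=%q exit=%d err=%v\n", state, code, err) // expect: "Stopped", 7
}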

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.786437541s)

                                                
                                                
-- stdout --
	* [embed-certs-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-134000" primary control-plane node in "embed-certs-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:26:17.786699    5530 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:17.786818    5530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:17.786825    5530 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:17.786827    5530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:17.786960    5530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:17.788049    5530 out.go:298] Setting JSON to false
	I0731 10:26:17.803972    5530 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3341,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:17.804035    5530 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:17.808320    5530 out.go:177] * [embed-certs-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:17.815216    5530 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:17.815241    5530 notify.go:220] Checking for updates...
	I0731 10:26:17.822215    5530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:17.825233    5530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:17.828242    5530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:17.831254    5530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:17.834143    5530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:17.837500    5530 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:17.837566    5530 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:17.837620    5530 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:17.841250    5530 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:26:17.848162    5530 start.go:297] selected driver: qemu2
	I0731 10:26:17.848168    5530 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:26:17.848174    5530 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:17.850328    5530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:26:17.853213    5530 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:26:17.856195    5530 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:26:17.856225    5530 cni.go:84] Creating CNI manager for ""
	I0731 10:26:17.856232    5530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:17.856239    5530 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:26:17.856263    5530 start.go:340] cluster config:
	{Name:embed-certs-134000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:17.859989    5530 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:17.867230    5530 out.go:177] * Starting "embed-certs-134000" primary control-plane node in "embed-certs-134000" cluster
	I0731 10:26:17.871219    5530 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:26:17.871237    5530 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:26:17.871252    5530 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:17.871316    5530 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:17.871324    5530 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:26:17.871398    5530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/embed-certs-134000/config.json ...
	I0731 10:26:17.871409    5530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/embed-certs-134000/config.json: {Name:mk283beae3a0a8f3ce06a9dd3511f1649901dfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:26:17.871762    5530 start.go:360] acquireMachinesLock for embed-certs-134000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:17.871795    5530 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "embed-certs-134000"
	I0731 10:26:17.871806    5530 start.go:93] Provisioning new machine with config: &{Name:embed-certs-134000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:17.871839    5530 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:17.880159    5530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:17.897569    5530 start.go:159] libmachine.API.Create for "embed-certs-134000" (driver="qemu2")
	I0731 10:26:17.897594    5530 client.go:168] LocalClient.Create starting
	I0731 10:26:17.897653    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:17.897688    5530 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:17.897697    5530 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:17.897741    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:17.897763    5530 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:17.897770    5530 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:17.898156    5530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:18.048790    5530 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:18.084218    5530 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:18.084223    5530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:18.084412    5530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:18.093619    5530 main.go:141] libmachine: STDOUT: 
	I0731 10:26:18.093638    5530 main.go:141] libmachine: STDERR: 
	I0731 10:26:18.093684    5530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2 +20000M
	I0731 10:26:18.101459    5530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:18.101480    5530 main.go:141] libmachine: STDERR: 
	I0731 10:26:18.101492    5530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:18.101507    5530 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:18.101518    5530 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:18.101544    5530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:2f:85:b1:70:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:18.103166    5530 main.go:141] libmachine: STDOUT: 
	I0731 10:26:18.103183    5530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:18.103201    5530 client.go:171] duration metric: took 205.603917ms to LocalClient.Create
	I0731 10:26:20.105347    5530 start.go:128] duration metric: took 2.233523333s to createHost
	I0731 10:26:20.105399    5530 start.go:83] releasing machines lock for "embed-certs-134000", held for 2.233630542s
	W0731 10:26:20.105487    5530 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:20.120684    5530 out.go:177] * Deleting "embed-certs-134000" in qemu2 ...
	W0731 10:26:20.148031    5530 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:20.148057    5530 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:25.150213    5530 start.go:360] acquireMachinesLock for embed-certs-134000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:25.150582    5530 start.go:364] duration metric: took 284.167µs to acquireMachinesLock for "embed-certs-134000"
	I0731 10:26:25.150666    5530 start.go:93] Provisioning new machine with config: &{Name:embed-certs-134000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:25.150873    5530 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:25.161351    5530 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:25.209052    5530 start.go:159] libmachine.API.Create for "embed-certs-134000" (driver="qemu2")
	I0731 10:26:25.209103    5530 client.go:168] LocalClient.Create starting
	I0731 10:26:25.209236    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:25.209299    5530 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:25.209322    5530 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:25.209395    5530 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:25.209439    5530 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:25.209455    5530 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:25.210031    5530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:25.370020    5530 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:25.483196    5530 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:25.483202    5530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:25.483386    5530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:25.492503    5530 main.go:141] libmachine: STDOUT: 
	I0731 10:26:25.492522    5530 main.go:141] libmachine: STDERR: 
	I0731 10:26:25.492577    5530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2 +20000M
	I0731 10:26:25.500479    5530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:25.500496    5530 main.go:141] libmachine: STDERR: 
	I0731 10:26:25.500506    5530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:25.500511    5530 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:25.500525    5530 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:25.500562    5530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:49:c2:21:7d:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:25.502248    5530 main.go:141] libmachine: STDOUT: 
	I0731 10:26:25.502276    5530 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:25.502289    5530 client.go:171] duration metric: took 293.186417ms to LocalClient.Create
	I0731 10:26:27.504441    5530 start.go:128] duration metric: took 2.353576s to createHost
	I0731 10:26:27.504490    5530 start.go:83] releasing machines lock for "embed-certs-134000", held for 2.353919792s
	W0731 10:26:27.504812    5530 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:27.515369    5530 out.go:177] 
	W0731 10:26:27.519430    5530 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:27.519479    5530 out.go:239] * 
	* 
	W0731 10:26:27.522012    5530 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:27.531305    5530 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (66.352584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
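The first-start path above shows the driver's full create sequence: build the qcow2 disk with qemu-img convert and resize, then launch qemu-system-aarch64 through socket_vmnet_client; when that dial fails, minikube deletes the machine and retries once after 5 seconds before exiting with GUEST_PROVISION. The retry shape, as a sketch (not minikube's actual start code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry mirrors the flow in the log: one failed StartHost,
// a fixed 5-second wait, a single second attempt, then give up.
func startWithRetry(start func() error) error {
	if err := start(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		return start()
	}
	return nil
}

func main() {
	dial := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	if err := startWithRetry(dial); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}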

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-134000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-134000 create -f testdata/busybox.yaml: exit status 1 (29.546542ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-134000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-134000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (30.139792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (29.416416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-134000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-134000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-134000 describe deploy/metrics-server -n kube-system: exit status 1 (26.904958ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-134000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-134000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (29.865125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.181806084s)

                                                
                                                
-- stdout --
	* [embed-certs-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-134000" primary control-plane node in "embed-certs-134000" cluster
	* Restarting existing qemu2 VM for "embed-certs-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 10:26:29.970237    5573 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:29.970376    5573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:29.970379    5573 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:29.970382    5573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:29.970498    5573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:29.971520    5573 out.go:298] Setting JSON to false
	I0731 10:26:29.987431    5573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3353,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:29.987506    5573 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:29.992085    5573 out.go:177] * [embed-certs-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:30.000037    5573 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:30.000091    5573 notify.go:220] Checking for updates...
	I0731 10:26:30.007089    5573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:30.010024    5573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:30.013117    5573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:30.016099    5573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:30.019054    5573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:30.022432    5573 config.go:182] Loaded profile config "embed-certs-134000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:30.022743    5573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:30.027090    5573 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:26:30.034111    5573 start.go:297] selected driver: qemu2
	I0731 10:26:30.034116    5573 start.go:901] validating driver "qemu2" against &{Name:embed-certs-134000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:30.034185    5573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:30.036330    5573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:26:30.036371    5573 cni.go:84] Creating CNI manager for ""
	I0731 10:26:30.036378    5573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:30.036405    5573 start.go:340] cluster config:
	{Name:embed-certs-134000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-134000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:30.039807    5573 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:30.046025    5573 out.go:177] * Starting "embed-certs-134000" primary control-plane node in "embed-certs-134000" cluster
	I0731 10:26:30.050033    5573 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:26:30.050047    5573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:26:30.050053    5573 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:30.050114    5573 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:30.050119    5573 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:26:30.050172    5573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/embed-certs-134000/config.json ...
	I0731 10:26:30.050633    5573 start.go:360] acquireMachinesLock for embed-certs-134000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:30.050661    5573 start.go:364] duration metric: took 21.583µs to acquireMachinesLock for "embed-certs-134000"
	I0731 10:26:30.050669    5573 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:30.050672    5573 fix.go:54] fixHost starting: 
	I0731 10:26:30.050783    5573 fix.go:112] recreateIfNeeded on embed-certs-134000: state=Stopped err=<nil>
	W0731 10:26:30.050791    5573 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:30.059054    5573 out.go:177] * Restarting existing qemu2 VM for "embed-certs-134000" ...
	I0731 10:26:30.062856    5573 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:30.062891    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:49:c2:21:7d:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:30.064795    5573 main.go:141] libmachine: STDOUT: 
	I0731 10:26:30.064815    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:30.064844    5573 fix.go:56] duration metric: took 14.170542ms for fixHost
	I0731 10:26:30.064848    5573 start.go:83] releasing machines lock for "embed-certs-134000", held for 14.183375ms
	W0731 10:26:30.064855    5573 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:30.064885    5573 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:30.064890    5573 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:35.067019    5573 start.go:360] acquireMachinesLock for embed-certs-134000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:35.067354    5573 start.go:364] duration metric: took 260.875µs to acquireMachinesLock for "embed-certs-134000"
	I0731 10:26:35.067502    5573 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:35.067519    5573 fix.go:54] fixHost starting: 
	I0731 10:26:35.068244    5573 fix.go:112] recreateIfNeeded on embed-certs-134000: state=Stopped err=<nil>
	W0731 10:26:35.068275    5573 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:35.076606    5573 out.go:177] * Restarting existing qemu2 VM for "embed-certs-134000" ...
	I0731 10:26:35.080597    5573 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:35.080825    5573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:49:c2:21:7d:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/embed-certs-134000/disk.qcow2
	I0731 10:26:35.089544    5573 main.go:141] libmachine: STDOUT: 
	I0731 10:26:35.089772    5573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:35.089834    5573 fix.go:56] duration metric: took 22.316334ms for fixHost
	I0731 10:26:35.089851    5573 start.go:83] releasing machines lock for "embed-certs-134000", held for 22.472334ms
	W0731 10:26:35.089986    5573 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:35.097518    5573 out.go:177] 
	W0731 10:26:35.101633    5573 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:35.101662    5573 out.go:239] * 
	* 
	W0731 10:26:35.104144    5573 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:35.111623    5573 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-134000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (67.461958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
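
Every failure in this group traces back to the same STDERR line: nothing was listening on /var/run/socket_vmnet, so the socket_vmnet_client wrapper could not hand QEMU the network file descriptor it passes as fd=3. A minimal Go sketch, illustrative and not part of the test suite, of the check the driver is effectively making when it reports "Connection refused":

	// probe_socket.go: dial the vmnet control socket directly.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			// With the socket_vmnet daemon down, this prints a
			// "connect: connection refused" error, matching the log above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}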

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-134000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (32.468625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-134000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-134000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-134000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.675875ms)

** stderr ** 
	error: context "embed-certs-134000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-134000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (29.935166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-134000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
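
The "(-want +got)" block above is a go-cmp diff: each "-" entry is an expected image that was absent from the `image list` output, which was empty because the VM never started. A self-contained sketch of how such a diff is produced (illustrative names, not the actual test code):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: nothing was listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}
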
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (28.77275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-134000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-134000 --alsologtostderr -v=1: exit status 83 (40.670208ms)

-- stdout --
	* The control-plane node embed-certs-134000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-134000"

-- /stdout --
** stderr ** 
	I0731 10:26:35.379001    5592 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:35.379167    5592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:35.379171    5592 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:35.379173    5592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:35.379312    5592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:35.379540    5592 out.go:298] Setting JSON to false
	I0731 10:26:35.379546    5592 mustload.go:65] Loading cluster: embed-certs-134000
	I0731 10:26:35.379763    5592 config.go:182] Loaded profile config "embed-certs-134000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:35.383823    5592 out.go:177] * The control-plane node embed-certs-134000 host is not running: state=Stopped
	I0731 10:26:35.387923    5592 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-134000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-134000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (28.919416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (29.459917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-134000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.067081958s)

-- stdout --
	* [default-k8s-diff-port-833000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-833000" primary control-plane node in "default-k8s-diff-port-833000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-833000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:26:35.798038    5616 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:35.798175    5616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:35.798178    5616 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:35.798180    5616 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:35.798305    5616 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:35.799348    5616 out.go:298] Setting JSON to false
	I0731 10:26:35.815229    5616 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3359,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:35.815288    5616 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:35.819956    5616 out.go:177] * [default-k8s-diff-port-833000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:35.826917    5616 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:35.826979    5616 notify.go:220] Checking for updates...
	I0731 10:26:35.834885    5616 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:35.837918    5616 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:35.840878    5616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:35.843861    5616 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:35.846891    5616 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:35.850254    5616 config.go:182] Loaded profile config "cert-expiration-013000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:35.850321    5616 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:35.850364    5616 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:35.854806    5616 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:26:35.861944    5616 start.go:297] selected driver: qemu2
	I0731 10:26:35.861951    5616 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:26:35.861959    5616 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:35.864137    5616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 10:26:35.866889    5616 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:26:35.870059    5616 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:26:35.870077    5616 cni.go:84] Creating CNI manager for ""
	I0731 10:26:35.870085    5616 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:35.870089    5616 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:26:35.870124    5616 start.go:340] cluster config:
	{Name:default-k8s-diff-port-833000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:35.873839    5616 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:35.882880    5616 out.go:177] * Starting "default-k8s-diff-port-833000" primary control-plane node in "default-k8s-diff-port-833000" cluster
	I0731 10:26:35.886871    5616 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:26:35.886889    5616 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:26:35.886904    5616 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:35.886978    5616 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:35.886985    5616 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:26:35.887041    5616 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/default-k8s-diff-port-833000/config.json ...
	I0731 10:26:35.887053    5616 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/default-k8s-diff-port-833000/config.json: {Name:mke7797023504f03285ed8791196df29bbd3288e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:26:35.887477    5616 start.go:360] acquireMachinesLock for default-k8s-diff-port-833000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:35.887515    5616 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "default-k8s-diff-port-833000"
	I0731 10:26:35.887526    5616 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-833000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:35.887554    5616 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:35.890902    5616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:35.908634    5616 start.go:159] libmachine.API.Create for "default-k8s-diff-port-833000" (driver="qemu2")
	I0731 10:26:35.908672    5616 client.go:168] LocalClient.Create starting
	I0731 10:26:35.908739    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:35.908775    5616 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:35.908784    5616 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:35.908821    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:35.908845    5616 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:35.908853    5616 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:35.909292    5616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:36.062128    5616 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:36.278020    5616 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:36.278028    5616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:36.278258    5616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:36.288121    5616 main.go:141] libmachine: STDOUT: 
	I0731 10:26:36.288140    5616 main.go:141] libmachine: STDERR: 
	I0731 10:26:36.288203    5616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2 +20000M
	I0731 10:26:36.296031    5616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:36.296046    5616 main.go:141] libmachine: STDERR: 
	I0731 10:26:36.296065    5616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:36.296074    5616 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:36.296089    5616 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:36.296125    5616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0e:fe:5a:73:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:36.297685    5616 main.go:141] libmachine: STDOUT: 
	I0731 10:26:36.297701    5616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:36.297720    5616 client.go:171] duration metric: took 389.049542ms to LocalClient.Create
	I0731 10:26:38.299885    5616 start.go:128] duration metric: took 2.41234625s to createHost
	I0731 10:26:38.300076    5616 start.go:83] releasing machines lock for "default-k8s-diff-port-833000", held for 2.412459416s
	W0731 10:26:38.300142    5616 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:38.313725    5616 out.go:177] * Deleting "default-k8s-diff-port-833000" in qemu2 ...
	W0731 10:26:38.344409    5616 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:38.344433    5616 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:43.346492    5616 start.go:360] acquireMachinesLock for default-k8s-diff-port-833000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:43.355793    5616 start.go:364] duration metric: took 9.232167ms to acquireMachinesLock for "default-k8s-diff-port-833000"
	I0731 10:26:43.355845    5616 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-833000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:43.356088    5616 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:43.367799    5616 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:43.417281    5616 start.go:159] libmachine.API.Create for "default-k8s-diff-port-833000" (driver="qemu2")
	I0731 10:26:43.417334    5616 client.go:168] LocalClient.Create starting
	I0731 10:26:43.417436    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:43.417509    5616 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:43.417528    5616 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:43.417584    5616 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:43.417627    5616 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:43.417642    5616 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:43.418177    5616 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:43.677453    5616 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:43.777518    5616 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:43.777525    5616 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:43.777682    5616 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:43.786531    5616 main.go:141] libmachine: STDOUT: 
	I0731 10:26:43.786548    5616 main.go:141] libmachine: STDERR: 
	I0731 10:26:43.786593    5616 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2 +20000M
	I0731 10:26:43.794392    5616 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:43.794405    5616 main.go:141] libmachine: STDERR: 
	I0731 10:26:43.794418    5616 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:43.794421    5616 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:43.794431    5616 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:43.794475    5616 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1f:80:cf:75:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:43.796092    5616 main.go:141] libmachine: STDOUT: 
	I0731 10:26:43.796106    5616 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:43.796117    5616 client.go:171] duration metric: took 378.784208ms to LocalClient.Create
	I0731 10:26:45.798246    5616 start.go:128] duration metric: took 2.44217275s to createHost
	I0731 10:26:45.798473    5616 start.go:83] releasing machines lock for "default-k8s-diff-port-833000", held for 2.442691209s
	W0731 10:26:45.798782    5616 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-833000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-833000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:45.807630    5616 out.go:177] 
	W0731 10:26:45.815808    5616 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:45.815867    5616 out.go:239] * 
	* 
	W0731 10:26:45.818238    5616 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:45.828658    5616 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (49.962125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.12s)
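
The create path above follows a fixed pattern: StartHost fails against the dead vmnet socket, minikube deletes the half-created machine, waits five seconds, retries once, and then exits with the GUEST_PROVISION reason (status 80). A rough sketch of that control flow; the names are invented, and the real logic lives in minikube's start code:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the QEMU launch that fails while socket_vmnet is down.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status 80 reported by the failed start commands
			}
		}
	}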

TestStartStop/group/newest-cni/serial/FirstStart (12.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (11.965681584s)

-- stdout --
	* [newest-cni-766000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-766000" primary control-plane node in "newest-cni-766000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:26:43.582345    5640 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:43.582519    5640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:43.582522    5640 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:43.582525    5640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:43.582647    5640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:43.584022    5640 out.go:298] Setting JSON to false
	I0731 10:26:43.603534    5640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3367,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:43.603620    5640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:43.610827    5640 out.go:177] * [newest-cni-766000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:43.621947    5640 notify.go:220] Checking for updates...
	I0731 10:26:43.627787    5640 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:43.634661    5640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:43.640769    5640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:43.655731    5640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:43.664688    5640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:43.672722    5640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:43.677152    5640 config.go:182] Loaded profile config "default-k8s-diff-port-833000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:43.677227    5640 config.go:182] Loaded profile config "multinode-759000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:43.677287    5640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:43.682729    5640 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 10:26:43.685655    5640 start.go:297] selected driver: qemu2
	I0731 10:26:43.685661    5640 start.go:901] validating driver "qemu2" against <nil>
	I0731 10:26:43.685666    5640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:43.688114    5640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 10:26:43.688138    5640 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 10:26:43.692736    5640 out.go:177] * Automatically selected the socket_vmnet network
	I0731 10:26:43.700768    5640 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 10:26:43.700780    5640 cni.go:84] Creating CNI manager for ""
	I0731 10:26:43.700787    5640 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:43.700795    5640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 10:26:43.700839    5640 start.go:340] cluster config:
	{Name:newest-cni-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:43.704207    5640 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:43.711538    5640 out.go:177] * Starting "newest-cni-766000" primary control-plane node in "newest-cni-766000" cluster
	I0731 10:26:43.715705    5640 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 10:26:43.715717    5640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 10:26:43.715726    5640 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:43.715772    5640 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:43.715777    5640 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 10:26:43.715830    5640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/newest-cni-766000/config.json ...
	I0731 10:26:43.715839    5640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/newest-cni-766000/config.json: {Name:mkc3536d1acbd3d83541b4f0f09bacd3efaf2f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:26:43.716086    5640 start.go:360] acquireMachinesLock for newest-cni-766000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:45.798633    5640 start.go:364] duration metric: took 2.082556s to acquireMachinesLock for "newest-cni-766000"
	I0731 10:26:45.798788    5640 start.go:93] Provisioning new machine with config: &{Name:newest-cni-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:45.799041    5640 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:45.807632    5640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:45.855753    5640 start.go:159] libmachine.API.Create for "newest-cni-766000" (driver="qemu2")
	I0731 10:26:45.855799    5640 client.go:168] LocalClient.Create starting
	I0731 10:26:45.855909    5640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:45.855970    5640 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:45.855986    5640 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:45.856051    5640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:45.856095    5640 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:45.856112    5640 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:45.856718    5640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:46.017809    5640 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:46.121259    5640 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:46.121268    5640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:46.121450    5640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:46.131063    5640 main.go:141] libmachine: STDOUT: 
	I0731 10:26:46.131094    5640 main.go:141] libmachine: STDERR: 
	I0731 10:26:46.131162    5640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2 +20000M
	I0731 10:26:46.140297    5640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:46.140318    5640 main.go:141] libmachine: STDERR: 
	I0731 10:26:46.140346    5640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:46.140350    5640 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:46.140363    5640 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:46.140391    5640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f1:37:30:92:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:46.142050    5640 main.go:141] libmachine: STDOUT: 
	I0731 10:26:46.142066    5640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:46.142086    5640 client.go:171] duration metric: took 286.285ms to LocalClient.Create
	I0731 10:26:48.142500    5640 start.go:128] duration metric: took 2.343477583s to createHost
	I0731 10:26:48.142519    5640 start.go:83] releasing machines lock for "newest-cni-766000", held for 2.343887833s
	W0731 10:26:48.142529    5640 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:48.150209    5640 out.go:177] * Deleting "newest-cni-766000" in qemu2 ...
	W0731 10:26:48.160835    5640 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:48.160844    5640 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:53.161130    5640 start.go:360] acquireMachinesLock for newest-cni-766000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:53.161574    5640 start.go:364] duration metric: took 362.084µs to acquireMachinesLock for "newest-cni-766000"
	I0731 10:26:53.161690    5640 start.go:93] Provisioning new machine with config: &{Name:newest-cni-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 10:26:53.161941    5640 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 10:26:53.171301    5640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 10:26:53.223469    5640 start.go:159] libmachine.API.Create for "newest-cni-766000" (driver="qemu2")
	I0731 10:26:53.223517    5640 client.go:168] LocalClient.Create starting
	I0731 10:26:53.223650    5640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/ca.pem
	I0731 10:26:53.223747    5640 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:53.223761    5640 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:53.223816    5640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19349-1049/.minikube/certs/cert.pem
	I0731 10:26:53.223864    5640 main.go:141] libmachine: Decoding PEM data...
	I0731 10:26:53.223879    5640 main.go:141] libmachine: Parsing certificate...
	I0731 10:26:53.224506    5640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 10:26:53.387767    5640 main.go:141] libmachine: Creating SSH key...
	I0731 10:26:53.430327    5640 main.go:141] libmachine: Creating Disk image...
	I0731 10:26:53.430332    5640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 10:26:53.430513    5640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:53.439695    5640 main.go:141] libmachine: STDOUT: 
	I0731 10:26:53.439709    5640 main.go:141] libmachine: STDERR: 
	I0731 10:26:53.439769    5640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2 +20000M
	I0731 10:26:53.447635    5640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 10:26:53.447649    5640 main.go:141] libmachine: STDERR: 
	I0731 10:26:53.447658    5640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:53.447664    5640 main.go:141] libmachine: Starting QEMU VM...
	I0731 10:26:53.447675    5640 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:53.447705    5640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:22:f8:c1:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:53.449359    5640 main.go:141] libmachine: STDOUT: 
	I0731 10:26:53.449376    5640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:53.449388    5640 client.go:171] duration metric: took 225.870125ms to LocalClient.Create
	I0731 10:26:55.451679    5640 start.go:128] duration metric: took 2.289714917s to createHost
	I0731 10:26:55.451775    5640 start.go:83] releasing machines lock for "newest-cni-766000", held for 2.290213875s
	W0731 10:26:55.452155    5640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:55.461671    5640 out.go:177] 
	W0731 10:26:55.478781    5640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:55.478808    5640 out.go:239] * 
	* 
	W0731 10:26:55.481229    5640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:55.492503    5640 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (62.748208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (12.03s)
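
Note: every qemu2 start in this run fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. The probe below is a minimal sketch (Go standard library only, not part of minikube; the socket path comes from SocketVMnetPath in the config dump above) that reproduces the "Connection refused" without starting a VM:

	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Dial the same unix socket that socket_vmnet_client hands to QEMU.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

"Connection refused" on a unix socket whose file exists usually means nothing is listening on it, i.e. the socket_vmnet daemon on the build host is down.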

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-833000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-833000 create -f testdata/busybox.yaml: exit status 1 (30.856ms)

** stderr ** 
	error: context "default-k8s-diff-port-833000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-833000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (32.635041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (32.750875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
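
Note: the context "default-k8s-diff-port-833000" does not exist errors in this group are cascade failures: the earlier start never wrote a context into the kubeconfig, so kubectl fails before reaching any cluster. A hedged sketch (assuming client-go is available; the loader honors the same KUBECONFIG shown in the stdout blocks) that lists which contexts the harness actually has:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Default loading rules: $KUBECONFIG if set, else ~/.kube/config.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println("have context:", name)
		}
	}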

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-833000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-833000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-833000 describe deploy/metrics-server -n kube-system: exit status 1 (28.047333ms)

** stderr ** 
	error: context "default-k8s-diff-port-833000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-833000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (30.112458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
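
Note: the "addons enable" command itself exits 0 here because it only updates the profile on disk; the failure surfaces one step later when kubectl tries to verify the deployment. A sketch for inspecting those persisted overrides (the field names mirror the ClusterConfig dump in this log, but the real schema lives in minikube's config package, so treat this struct as an assumption):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// Partial, assumed view of the profile's config.json.
	type clusterConfig struct {
		CustomAddonImages     map[string]string
		CustomAddonRegistries map[string]string
	}
	
	func main() {
		path := os.Getenv("MINIKUBE_HOME") + "/profiles/default-k8s-diff-port-833000/config.json"
		b, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var cc clusterConfig
		if err := json.Unmarshal(b, &cc); err != nil {
			panic(err)
		}
		fmt.Println("images:", cc.CustomAddonImages, "registries:", cc.CustomAddonRegistries)
	}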

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (7.325588375s)

-- stdout --
	* [default-k8s-diff-port-833000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-833000" primary control-plane node in "default-k8s-diff-port-833000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-833000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-833000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:26:48.236349    5684 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:48.236485    5684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:48.236488    5684 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:48.236491    5684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:48.236631    5684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:48.237827    5684 out.go:298] Setting JSON to false
	I0731 10:26:48.253974    5684 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3372,"bootTime":1722443436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:48.254040    5684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:48.259315    5684 out.go:177] * [default-k8s-diff-port-833000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:48.266215    5684 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:48.266265    5684 notify.go:220] Checking for updates...
	I0731 10:26:48.273183    5684 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:48.276240    5684 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:48.279219    5684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:48.282116    5684 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:48.285194    5684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:48.288482    5684 config.go:182] Loaded profile config "default-k8s-diff-port-833000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:48.288761    5684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:48.293197    5684 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:26:48.300204    5684 start.go:297] selected driver: qemu2
	I0731 10:26:48.300213    5684 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-833000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:48.300307    5684 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:48.302751    5684 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:26:48.302787    5684 cni.go:84] Creating CNI manager for ""
	I0731 10:26:48.302794    5684 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:48.302834    5684 start.go:340] cluster config:
	{Name:default-k8s-diff-port-833000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-833000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:48.306456    5684 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:48.314172    5684 out.go:177] * Starting "default-k8s-diff-port-833000" primary control-plane node in "default-k8s-diff-port-833000" cluster
	I0731 10:26:48.317116    5684 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 10:26:48.317132    5684 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 10:26:48.317144    5684 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:48.317208    5684 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:48.317214    5684 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 10:26:48.317294    5684 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/default-k8s-diff-port-833000/config.json ...
	I0731 10:26:48.317760    5684 start.go:360] acquireMachinesLock for default-k8s-diff-port-833000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:48.317795    5684 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "default-k8s-diff-port-833000"
	I0731 10:26:48.317804    5684 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:48.317810    5684 fix.go:54] fixHost starting: 
	I0731 10:26:48.317937    5684 fix.go:112] recreateIfNeeded on default-k8s-diff-port-833000: state=Stopped err=<nil>
	W0731 10:26:48.317947    5684 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:48.321242    5684 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-833000" ...
	I0731 10:26:48.329176    5684 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:48.329213    5684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1f:80:cf:75:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:48.331326    5684 main.go:141] libmachine: STDOUT: 
	I0731 10:26:48.331347    5684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:48.331374    5684 fix.go:56] duration metric: took 13.565542ms for fixHost
	I0731 10:26:48.331378    5684 start.go:83] releasing machines lock for "default-k8s-diff-port-833000", held for 13.578917ms
	W0731 10:26:48.331387    5684 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:48.331428    5684 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:48.331433    5684 start.go:729] Will try again in 5 seconds ...
	I0731 10:26:53.333424    5684 start.go:360] acquireMachinesLock for default-k8s-diff-port-833000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:55.451967    5684 start.go:364] duration metric: took 2.118479167s to acquireMachinesLock for "default-k8s-diff-port-833000"
	I0731 10:26:55.452213    5684 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:55.452255    5684 fix.go:54] fixHost starting: 
	I0731 10:26:55.452976    5684 fix.go:112] recreateIfNeeded on default-k8s-diff-port-833000: state=Stopped err=<nil>
	W0731 10:26:55.453005    5684 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:55.472523    5684 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-833000" ...
	I0731 10:26:55.481746    5684 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:55.481953    5684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1f:80:cf:75:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/default-k8s-diff-port-833000/disk.qcow2
	I0731 10:26:55.490971    5684 main.go:141] libmachine: STDOUT: 
	I0731 10:26:55.491034    5684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:55.491104    5684 fix.go:56] duration metric: took 38.875875ms for fixHost
	I0731 10:26:55.491120    5684 start.go:83] releasing machines lock for "default-k8s-diff-port-833000", held for 39.054334ms
	W0731 10:26:55.491344    5684 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-833000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-833000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:55.504664    5684 out.go:177] 
	W0731 10:26:55.508745    5684 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:55.508776    5684 out.go:239] * 
	* 
	W0731 10:26:55.511495    5684 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:26:55.525882    5684 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-833000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (53.511209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.38s)
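
Note: on this second start the profile already exists, so minikube takes the fixHost path ("Skipping create...Using existing machine configuration") and, as the start.go:714/729 lines show, retries exactly once after a fixed five seconds before exiting with GUEST_PROVISION. An illustrative sketch of that retry shape (not minikube's actual code):

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// One failed attempt, a fixed five-second pause, then a single retry
	// whose error becomes fatal, matching the log above.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}
	
	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println("fatal:", err)
	}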

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-833000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (35.932667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-833000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-833000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-833000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.045542ms)

** stderr ** 
	error: context "default-k8s-diff-port-833000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-833000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (32.802083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-833000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (28.67225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
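
Note: the image diff above uses go-cmp's "(-want +got)" notation; every expected v1.30.3 image sits on the "-" side because "image list" ran against a host that never booted. A small reproduction of that output format (assuming github.com/google/go-cmp, which the "(-want +got)" header suggests, with a shortened want list):

	package main
	
	import (
		"fmt"
	
		"github.com/google/go-cmp/cmp"
	)
	
	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: the host is Stopped, so nothing is listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}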

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-833000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-833000 --alsologtostderr -v=1: exit status 83 (38.979458ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-833000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-833000"

-- /stdout --
** stderr ** 
	I0731 10:26:55.776997    5715 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:55.777215    5715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:55.777218    5715 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:55.777220    5715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:55.777367    5715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:55.777591    5715 out.go:298] Setting JSON to false
	I0731 10:26:55.777597    5715 mustload.go:65] Loading cluster: default-k8s-diff-port-833000
	I0731 10:26:55.777810    5715 config.go:182] Loaded profile config "default-k8s-diff-port-833000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 10:26:55.782246    5715 out.go:177] * The control-plane node default-k8s-diff-port-833000 host is not running: state=Stopped
	I0731 10:26:55.786007    5715 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-833000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-833000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (28.863083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (28.910125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-833000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
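
Note: three distinct exit codes appear in this group: 80 for failed provisioning (GUEST_PROVISION), 83 when a command is refused because the host is Stopped (the pause above), and 7 from "status" against a stopped host. A sketch of telling them apart from a wrapper (standard library only; the binary path is the one used throughout this log):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "default-k8s-diff-port-833000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 83 in this run: the control-plane host is not running (state=Stopped).
			fmt.Println("pause exited with code", ee.ExitCode())
		}
	}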

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
E0731 10:27:03.507334    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.176414291s)

-- stdout --
	* [newest-cni-766000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-766000" primary control-plane node in "newest-cni-766000" cluster
	* Restarting existing qemu2 VM for "newest-cni-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-766000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 10:26:59.024651    5750 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:26:59.024788    5750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:59.024791    5750 out.go:304] Setting ErrFile to fd 2...
	I0731 10:26:59.024794    5750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:26:59.024926    5750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:26:59.025938    5750 out.go:298] Setting JSON to false
	I0731 10:26:59.041929    5750 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3383,"bootTime":1722443436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 10:26:59.042007    5750 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 10:26:59.045555    5750 out.go:177] * [newest-cni-766000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 10:26:59.051451    5750 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 10:26:59.051501    5750 notify.go:220] Checking for updates...
	I0731 10:26:59.057523    5750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 10:26:59.060529    5750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 10:26:59.063473    5750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:26:59.066525    5750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 10:26:59.067908    5750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:26:59.070725    5750 config.go:182] Loaded profile config "newest-cni-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 10:26:59.070986    5750 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 10:26:59.075488    5750 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 10:26:59.080524    5750 start.go:297] selected driver: qemu2
	I0731 10:26:59.080534    5750 start.go:901] validating driver "qemu2" against &{Name:newest-cni-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:59.080603    5750 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:26:59.082997    5750 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 10:26:59.083020    5750 cni.go:84] Creating CNI manager for ""
	I0731 10:26:59.083027    5750 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 10:26:59.083059    5750 start.go:340] cluster config:
	{Name:newest-cni-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 10:26:59.086443    5750 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:26:59.093580    5750 out.go:177] * Starting "newest-cni-766000" primary control-plane node in "newest-cni-766000" cluster
	I0731 10:26:59.097429    5750 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 10:26:59.097443    5750 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 10:26:59.097452    5750 cache.go:56] Caching tarball of preloaded images
	I0731 10:26:59.097504    5750 preload.go:172] Found /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 10:26:59.097509    5750 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 10:26:59.097554    5750 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/newest-cni-766000/config.json ...
	I0731 10:26:59.097970    5750 start.go:360] acquireMachinesLock for newest-cni-766000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:26:59.098008    5750 start.go:364] duration metric: took 31.166µs to acquireMachinesLock for "newest-cni-766000"
	I0731 10:26:59.098017    5750 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:26:59.098022    5750 fix.go:54] fixHost starting: 
	I0731 10:26:59.098133    5750 fix.go:112] recreateIfNeeded on newest-cni-766000: state=Stopped err=<nil>
	W0731 10:26:59.098141    5750 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:26:59.102502    5750 out.go:177] * Restarting existing qemu2 VM for "newest-cni-766000" ...
	I0731 10:26:59.110498    5750 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:26:59.110531    5750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:22:f8:c1:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:26:59.112400    5750 main.go:141] libmachine: STDOUT: 
	I0731 10:26:59.112417    5750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:26:59.112444    5750 fix.go:56] duration metric: took 14.423166ms for fixHost
	I0731 10:26:59.112449    5750 start.go:83] releasing machines lock for "newest-cni-766000", held for 14.436375ms
	W0731 10:26:59.112455    5750 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:26:59.112491    5750 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:26:59.112496    5750 start.go:729] Will try again in 5 seconds ...
	I0731 10:27:04.114620    5750 start.go:360] acquireMachinesLock for newest-cni-766000: {Name:mke63768d96156f861d5311c6576feb2759dfdfe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 10:27:04.115160    5750 start.go:364] duration metric: took 406.083µs to acquireMachinesLock for "newest-cni-766000"
	I0731 10:27:04.115297    5750 start.go:96] Skipping create...Using existing machine configuration
	I0731 10:27:04.115322    5750 fix.go:54] fixHost starting: 
	I0731 10:27:04.116112    5750 fix.go:112] recreateIfNeeded on newest-cni-766000: state=Stopped err=<nil>
	W0731 10:27:04.116145    5750 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 10:27:04.125849    5750 out.go:177] * Restarting existing qemu2 VM for "newest-cni-766000" ...
	I0731 10:27:04.129794    5750 qemu.go:418] Using hvf for hardware acceleration
	I0731 10:27:04.130059    5750 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:22:f8:c1:46:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19349-1049/.minikube/machines/newest-cni-766000/disk.qcow2
	I0731 10:27:04.140171    5750 main.go:141] libmachine: STDOUT: 
	I0731 10:27:04.140237    5750 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 10:27:04.140305    5750 fix.go:56] duration metric: took 24.992ms for fixHost
	I0731 10:27:04.140321    5750 start.go:83] releasing machines lock for "newest-cni-766000", held for 25.139791ms
	W0731 10:27:04.140471    5750 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-766000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 10:27:04.147762    5750 out.go:177] 
	W0731 10:27:04.151840    5750 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 10:27:04.151863    5750 out.go:239] * 
	* 
	W0731 10:27:04.154289    5750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 10:27:04.160814    5750 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-766000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (69.268958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
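
The failure above is the same host-side fault behind most qemu2 failures in this report: every start is routed through socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet before it can exec qemu, and that connect is refused because the socket_vmnet daemon is not accepting connections. A standalone probe (a sketch, not part of the test suite) reproduces the exact error:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// socket_vmnet_client begins by connecting to this unix socket; if the
		// socket_vmnet daemon is down, the dial fails with "connection refused",
		// the error string embedded in every start failure above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is up")
	}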

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-766000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (29.873416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
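
The "(-want +got)" listing above is a go-cmp style diff: the test compares the image list it expects for v1.31.0-beta.0 against whatever "image list --format=json" returned, and because the VM never started, the got side is empty and every wanted image is reported missing. A condensed sketch of that comparison (abbreviated stand-in image lists; assumes the github.com/google/go-cmp module):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// want: two stand-ins for the expected images; got: empty, as when the
		// profile's VM is stopped and "image list" has nothing to report.
		want := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.14-0"}
		got := []string{}
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}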

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-766000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-766000 --alsologtostderr -v=1: exit status 83 (40.637125ms)

-- stdout --
	* The control-plane node newest-cni-766000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-766000"

-- /stdout --
** stderr ** 
	I0731 10:27:04.345712    5764 out.go:291] Setting OutFile to fd 1 ...
	I0731 10:27:04.345861    5764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:27:04.345864    5764 out.go:304] Setting ErrFile to fd 2...
	I0731 10:27:04.345866    5764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 10:27:04.345997    5764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 10:27:04.346197    5764 out.go:298] Setting JSON to false
	I0731 10:27:04.346203    5764 mustload.go:65] Loading cluster: newest-cni-766000
	I0731 10:27:04.346408    5764 config.go:182] Loaded profile config "newest-cni-766000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 10:27:04.350528    5764 out.go:177] * The control-plane node newest-cni-766000 host is not running: state=Stopped
	I0731 10:27:04.354455    5764 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-766000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-766000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (28.829ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-766000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (29.044375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-766000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 6.05
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.18
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 153.21
38 TestAddons/serial/Volcano 38.94
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.35
43 TestAddons/parallel/Ingress 19.75
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 42.82
49 TestAddons/parallel/Headlamp 15.51
50 TestAddons/parallel/CloudSpanner 5.24
51 TestAddons/parallel/LocalPath 9.52
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.37
62 TestHyperKitDriverInstallOrUpdate 11.21
65 TestErrorSpam/setup 36.61
66 TestErrorSpam/start 0.35
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.62
69 TestErrorSpam/unpause 0.57
70 TestErrorSpam/stop 55.27
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 50.1
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.46
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.51
82 TestFunctional/serial/CacheCmd/cache/add_local 1.1
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.98
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 35.66
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.67
94 TestFunctional/serial/InvalidService 3.71
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 8.62
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.16
106 TestFunctional/parallel/PersistentVolumeClaim 24.85
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.4
111 TestFunctional/parallel/FileSync 0.1
112 TestFunctional/parallel/CertSync 0.39
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
120 TestFunctional/parallel/License 0.31
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.87
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.09
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 5.11
142 TestFunctional/parallel/MountCmd/specific-port 0.74
143 TestFunctional/parallel/MountCmd/VerifyCleanup 0.84
144 TestFunctional/parallel/Version/short 0.04
145 TestFunctional/parallel/Version/components 0.17
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.63
151 TestFunctional/parallel/ImageCommands/Setup 1.66
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.44
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.58
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
155 TestFunctional/parallel/DockerEnv/bash 0.29
156 TestFunctional/parallel/UpdateContextCmd/no_changes 1.19
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
159 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
162 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 204.97
170 TestMultiControlPlane/serial/DeployApp 3.68
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 84.17
173 TestMultiControlPlane/serial/NodeLabels 0.12
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.36
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 28.96
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 1.87
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.19
221 TestMainNoArgs 0.03
266 TestStoppedBinaryUpgrade/Setup 0.86
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
284 TestNoKubernetes/serial/ProfileList 15.76
285 TestNoKubernetes/serial/Stop 3.43
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
303 TestStartStop/group/old-k8s-version/serial/Stop 3.39
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 3.45
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
325 TestStartStop/group/embed-certs/serial/Stop 2
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.96
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
345 TestStartStop/group/newest-cni/serial/Stop 3.24
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-914000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-914000: exit status 85 (92.12125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-914000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |          |
	|         | -p download-only-914000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 09:39:05
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 09:39:05.145382    1554 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:39:05.145520    1554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:05.145524    1554 out.go:304] Setting ErrFile to fd 2...
	I0731 09:39:05.145526    1554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:05.145660    1554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	W0731 09:39:05.145747    1554 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19349-1049/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19349-1049/.minikube/config/config.json: no such file or directory
	I0731 09:39:05.147055    1554 out.go:298] Setting JSON to true
	I0731 09:39:05.164163    1554 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":509,"bootTime":1722443436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:39:05.164223    1554 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:39:05.169189    1554 out.go:97] [download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:39:05.169350    1554 notify.go:220] Checking for updates...
	W0731 09:39:05.169384    1554 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 09:39:05.170896    1554 out.go:169] MINIKUBE_LOCATION=19349
	I0731 09:39:05.173949    1554 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:39:05.178027    1554 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:39:05.179455    1554 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:39:05.182980    1554 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	W0731 09:39:05.189041    1554 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 09:39:05.189299    1554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:39:05.194031    1554 out.go:97] Using the qemu2 driver based on user configuration
	I0731 09:39:05.194050    1554 start.go:297] selected driver: qemu2
	I0731 09:39:05.194065    1554 start.go:901] validating driver "qemu2" against <nil>
	I0731 09:39:05.194146    1554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 09:39:05.196957    1554 out.go:169] Automatically selected the socket_vmnet network
	I0731 09:39:05.202734    1554 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 09:39:05.202822    1554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 09:39:05.202876    1554 cni.go:84] Creating CNI manager for ""
	I0731 09:39:05.202893    1554 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 09:39:05.202957    1554 start.go:340] cluster config:
	{Name:download-only-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:39:05.208432    1554 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 09:39:05.212974    1554 out.go:97] Downloading VM boot image ...
	I0731 09:39:05.212992    1554 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 09:39:09.736939    1554 out.go:97] Starting "download-only-914000" primary control-plane node in "download-only-914000" cluster
	I0731 09:39:09.736959    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:09.794112    1554 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:09.794121    1554 cache.go:56] Caching tarball of preloaded images
	I0731 09:39:09.794277    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:09.798820    1554 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 09:39:09.798827    1554 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:09.875268    1554 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:15.412431    1554 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:15.412580    1554 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:16.107363    1554 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 09:39:16.107626    1554 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-914000/config.json ...
	I0731 09:39:16.107646    1554 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-914000/config.json: {Name:mk5edff6f9ab0ab06bd4a3be203539e8b669aa05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 09:39:16.107897    1554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 09:39:16.108107    1554 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 09:39:16.460037    1554 out.go:169] 
	W0731 09:39:16.467366    1554 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60 0x108be5a60] Decompressors:map[bz2:0x1400066b310 gz:0x1400066b318 tar:0x1400066b2c0 tar.bz2:0x1400066b2d0 tar.gz:0x1400066b2e0 tar.xz:0x1400066b2f0 tar.zst:0x1400066b300 tbz2:0x1400066b2d0 tgz:0x1400066b2e0 txz:0x1400066b2f0 tzst:0x1400066b300 xz:0x1400066b320 zip:0x1400066b330 zst:0x1400066b328] Getters:map[file:0x1400134c550 http:0x140004763c0 https:0x14000476410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 09:39:16.467392    1554 out_reason.go:110] 
	W0731 09:39:16.474195    1554 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 09:39:16.478169    1554 out.go:169] 
	
	
	* The control-plane node download-only-914000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-914000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
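
The underlying cause recorded in that log is the 404 on the kubectl checksum: dl.k8s.io appears to publish no darwin/arm64 kubectl for v1.20.0, so the checksum URL (copied verbatim from the getter error above) resolves to nothing and the download is abandoned. A quick sketch that reproduces the response:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL taken verbatim from the out_reason.go error above.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // expected: "404 Not Found", matching "bad response code: 404"
	}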

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-914000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (6.05s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-069000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-069000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (6.054702542s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.05s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-069000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-069000: exit status 85 (75.6855ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-914000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |                     |
	|         | -p download-only-914000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| delete  | -p download-only-914000        | download-only-914000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| start   | -o=json --download-only        | download-only-069000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |                     |
	|         | -p download-only-069000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 09:39:16
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 09:39:16.880156    1578 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:39:16.880279    1578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:16.880283    1578 out.go:304] Setting ErrFile to fd 2...
	I0731 09:39:16.880285    1578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:16.880410    1578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:39:16.881481    1578 out.go:298] Setting JSON to true
	I0731 09:39:16.897393    1578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":520,"bootTime":1722443436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:39:16.897453    1578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:39:16.902245    1578 out.go:97] [download-only-069000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:39:16.902371    1578 notify.go:220] Checking for updates...
	I0731 09:39:16.906346    1578 out.go:169] MINIKUBE_LOCATION=19349
	I0731 09:39:16.909398    1578 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:39:16.913384    1578 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:39:16.916349    1578 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:39:16.919404    1578 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	W0731 09:39:16.925330    1578 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 09:39:16.925467    1578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:39:16.928300    1578 out.go:97] Using the qemu2 driver based on user configuration
	I0731 09:39:16.928309    1578 start.go:297] selected driver: qemu2
	I0731 09:39:16.928313    1578 start.go:901] validating driver "qemu2" against <nil>
	I0731 09:39:16.928358    1578 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 09:39:16.929843    1578 out.go:169] Automatically selected the socket_vmnet network
	I0731 09:39:16.935418    1578 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 09:39:16.935524    1578 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 09:39:16.935539    1578 cni.go:84] Creating CNI manager for ""
	I0731 09:39:16.935547    1578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 09:39:16.935552    1578 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 09:39:16.935583    1578 start.go:340] cluster config:
	{Name:download-only-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:39:16.938989    1578 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 09:39:16.942337    1578 out.go:97] Starting "download-only-069000" primary control-plane node in "download-only-069000" cluster
	I0731 09:39:16.942344    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 09:39:17.009524    1578 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 09:39:17.009536    1578 cache.go:56] Caching tarball of preloaded images
	I0731 09:39:17.009730    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 09:39:17.014800    1578 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 09:39:17.014808    1578 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:17.102036    1578 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 09:39:21.176761    1578 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:21.176905    1578 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:21.719441    1578 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 09:39:21.719633    1578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-069000/config.json ...
	I0731 09:39:21.719648    1578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-069000/config.json: {Name:mk3c34c406704143b0541243af938a92fc3f4122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 09:39:21.719970    1578 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 09:39:21.720109    1578 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-069000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-069000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
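The download URL above carries the expected digest in its ?checksum=md5:... query parameter, and the preload.go:254 step verifies the cached tarball against it before reuse. A minimal sketch of the same check done by hand with macOS's md5 tool, using the digest and cache path from this run (md5sum is the Linux equivalent):

expected=5a76dba1959f6b6fc5e29e1e172ab9ca
tarball=/Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
# md5 -q prints only the digest, so the two strings compare directly
[ "$(md5 -q "$tarball")" = "$expected" ] && echo "preload checksum OK" || echo "checksum mismatch"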

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-069000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.18s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.179862125s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.18s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-875000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-875000: exit status 85 (80.638792ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-914000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |                     |
	|         | -p download-only-914000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| delete  | -p download-only-914000             | download-only-914000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| start   | -o=json --download-only             | download-only-069000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |                     |
	|         | -p download-only-069000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| delete  | -p download-only-069000             | download-only-069000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT | 31 Jul 24 09:39 PDT |
	| start   | -o=json --download-only             | download-only-875000 | jenkins | v1.33.1 | 31 Jul 24 09:39 PDT |                     |
	|         | -p download-only-875000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 09:39:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 09:39:23.220563    1600 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:39:23.220760    1600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:23.220763    1600 out.go:304] Setting ErrFile to fd 2...
	I0731 09:39:23.220766    1600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:39:23.220895    1600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:39:23.221977    1600 out.go:298] Setting JSON to true
	I0731 09:39:23.237873    1600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":527,"bootTime":1722443436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:39:23.237936    1600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:39:23.242171    1600 out.go:97] [download-only-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:39:23.242277    1600 notify.go:220] Checking for updates...
	I0731 09:39:23.246129    1600 out.go:169] MINIKUBE_LOCATION=19349
	I0731 09:39:23.250082    1600 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:39:23.254059    1600 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:39:23.257081    1600 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:39:23.260154    1600 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	W0731 09:39:23.266048    1600 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 09:39:23.266177    1600 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:39:23.269098    1600 out.go:97] Using the qemu2 driver based on user configuration
	I0731 09:39:23.269108    1600 start.go:297] selected driver: qemu2
	I0731 09:39:23.269112    1600 start.go:901] validating driver "qemu2" against <nil>
	I0731 09:39:23.269171    1600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 09:39:23.272117    1600 out.go:169] Automatically selected the socket_vmnet network
	I0731 09:39:23.277131    1600 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 09:39:23.277210    1600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 09:39:23.277227    1600 cni.go:84] Creating CNI manager for ""
	I0731 09:39:23.277236    1600 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 09:39:23.277246    1600 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 09:39:23.277283    1600 start.go:340] cluster config:
	{Name:download-only-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:39:23.280584    1600 iso.go:125] acquiring lock: {Name:mk70425d2712e510c3bac553e23d0bc6c9373e7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 09:39:23.284157    1600 out.go:97] Starting "download-only-875000" primary control-plane node in "download-only-875000" cluster
	I0731 09:39:23.284166    1600 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 09:39:23.370660    1600 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:23.370670    1600 cache.go:56] Caching tarball of preloaded images
	I0731 09:39:23.370891    1600 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 09:39:23.375022    1600 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 09:39:23.375031    1600 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:23.450231    1600 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 09:39:27.535068    1600 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:27.535236    1600 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 09:39:28.053603    1600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 09:39:28.053788    1600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-875000/config.json ...
	I0731 09:39:28.053805    1600 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/download-only-875000/config.json: {Name:mka7d35f4499dec1e69d5584b29500a41ed0b119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 09:39:28.054049    1600 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 09:39:28.054175    1600 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19349-1049/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-875000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-875000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-875000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-226000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-226000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-226000
--- PASS: TestBinaryMirror (0.31s)
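The --binary-mirror flag substitutes an alternate base URL for the kubectl/kubelet/kubeadm downloads that normally come from dl.k8s.io; here the test points it at a short-lived server on 127.0.0.1:49325. A rough sketch of standing up such a mirror by hand; the directory layout (mirroring dl.k8s.io's release tree) and the profile name are assumptions, and the harness actually runs its own in-process server:

# Hypothetical mirror tree: release/<version>/bin/<os>/<arch>/<binary> plus .sha256 files
mkdir -p mirror/release/v1.30.3/bin/darwin/arm64
# ... populate the tree with kubectl, kubelet, kubeadm and their .sha256 files ...
(cd mirror && python3 -m http.server 49325) &
out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:49325 --driver=qemu2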

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-412000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-412000: exit status 85 (55.626042ms)
-- stdout --
	* Profile "addons-412000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-412000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-412000: exit status 85 (59.23225ms)
-- stdout --
	* Profile "addons-412000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (153.21s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-412000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-412000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m33.211219791s)
--- PASS: TestAddons/Setup (153.21s)
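With a thirteen-addon start like the one above, a quick sanity check is minikube's own addon listing, which reports each addon as enabled or disabled for the profile:

out/minikube-darwin-arm64 -p addons-412000 addons list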

TestAddons/serial/Volcano (38.94s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 6.610917ms
addons_test.go:897: volcano-scheduler stabilized in 6.758167ms
addons_test.go:905: volcano-admission stabilized in 6.8435ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-vt898" [7f15b9db-e144-4775-b648-867f35763442] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003601083s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-nmbrh" [675a859a-4059-4eba-9d98-4017ac726dbe] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003673917s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-lc5p8" [f2f760cb-55b9-437d-9590-df90a637bc93] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003587084s
addons_test.go:932: (dbg) Run:  kubectl --context addons-412000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-412000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-412000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ba652589-2615-49af-93ab-518723793ecc] Pending
helpers_test.go:344: "test-job-nginx-0" [ba652589-2615-49af-93ab-518723793ecc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ba652589-2615-49af-93ab-518723793ecc] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003778084s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-412000 addons disable volcano --alsologtostderr -v=1: (9.723865458s)
--- PASS: TestAddons/serial/Volcano (38.94s)
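The pod name test-job-nginx-0 above follows Volcano's <job>-<task>-<index> naming. A generic Volcano job of the same shape, submitted via a heredoc (a sketch only, not the contents of testdata/vcjob.yaml):

kubectl --context addons-412000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx              # task name feeds the pod name: test-job-nginx-0
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF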

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-412000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-412000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.178417ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-4929h" [88381650-e232-400c-968d-c3dcea96ec18] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003089875s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zszgg" [630773df-0414-48c8-9bc1-d6687dac801f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003767875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-412000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-412000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-412000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.050986333s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.35s)
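Besides the in-cluster wget probe, the registry addon also answers on the node IP at port 5000 (a later DEBUG line in this log hits 192.168.105.2:5000 directly). A hand-run equivalent against the standard registry v2 API (a sketch):

curl -sS "http://$(out/minikube-darwin-arm64 -p addons-412000 ip):5000/v2/_catalog"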

TestAddons/parallel/Ingress (19.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-412000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-412000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-412000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d120c3bf-e7c5-4d46-9a2a-0b72491fbbc5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d120c3bf-e7c5-4d46-9a2a-0b72491fbbc5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003528917s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-412000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-412000 addons disable ingress --alsologtostderr -v=1: (7.261888292s)
--- PASS: TestAddons/parallel/Ingress (19.75s)
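The nslookup above hard-codes the node IP returned by the preceding ip call; the same ingress-dns check written against whatever IP the profile currently has (a sketch using command substitution):

nslookup hello-john.test "$(out/minikube-darwin-arm64 -p addons-412000 ip)"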

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pgfz8" [4e65ffa8-9809-4dce-b70a-7883f6f494a5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004150708s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-412000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-412000: (5.210426792s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.429792ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-h2zkg" [40dad104-b125-43c5-9e63-3cc4439ace6e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003911625s
addons_test.go:417: (dbg) Run:  kubectl --context addons-412000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (42.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.891041ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-412000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-412000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4c05fcd7-5fbd-43a8-b887-a8e438e219c9] Pending
helpers_test.go:344: "task-pv-pod" [4c05fcd7-5fbd-43a8-b887-a8e438e219c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4c05fcd7-5fbd-43a8-b887-a8e438e219c9] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003701333s
addons_test.go:590: (dbg) Run:  kubectl --context addons-412000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-412000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-412000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-412000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-412000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-412000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
2024/07/31 09:43:11 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-412000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eba118a7-8e82-463f-b416-0c6e98fbe1a3] Pending
helpers_test.go:344: "task-pv-pod-restore" [eba118a7-8e82-463f-b416-0c6e98fbe1a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eba118a7-8e82-463f-b416-0c6e98fbe1a3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004092833s
addons_test.go:632: (dbg) Run:  kubectl --context addons-412000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-412000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-412000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-412000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.089602166s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.82s)
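The restore leg of the flow above (pvc-restore.yaml, then pv-pod-restore.yaml) hinges on a PVC whose dataSource names the snapshot. A generic sketch of that pattern, not the actual testdata files; both class names here are assumptions about the csi-hostpath addon's defaults:

kubectl --context addons-412000 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF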

TestAddons/parallel/Headlamp (15.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-412000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-vdc8w" [8f7e0f17-6f50-4dfa-99eb-5c58468ee690] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-vdc8w" [8f7e0f17-6f50-4dfa-99eb-5c58468ee690] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003848s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-412000 addons disable headlamp --alsologtostderr -v=1: (5.1884055s)
--- PASS: TestAddons/parallel/Headlamp (15.51s)

TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-bknvk" [702c8eb4-75e5-4205-acf6-b461656411c2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00384075s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-412000
--- PASS: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/LocalPath (9.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-412000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-412000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b8b17546-160b-43e6-a17f-0174ac221a95] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b8b17546-160b-43e6-a17f-0174ac221a95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b8b17546-160b-43e6-a17f-0174ac221a95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003850333s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-412000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 ssh "cat /opt/local-path-provisioner/pvc-9449e03c-5b93-4c7c-8533-e4abc4360a10_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-412000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-412000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.52s)
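The long run of Pending-phase polls above is expected: the local-path provisioner's storage class uses WaitForFirstConsumer binding, so test-pvc cannot bind until the busybox pod lands. A minimal PVC of the kind the test creates (a sketch; local-path is the provisioner's default class name, the size is arbitrary):

kubectl --context addons-412000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF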

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sp4gd" [f37663ea-eb83-4b61-a05c-c2c90974d671] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004361166s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-412000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-4m2gv" [53a5b3b0-9c4f-454b-b3a6-1d1c15e825a9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00246575s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-412000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-412000 addons disable yakd --alsologtostderr -v=1: (5.192884875s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-412000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-412000: (12.184539875s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-412000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-412000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-412000
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

TestHyperKitDriverInstallOrUpdate (11.21s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.21s)

TestErrorSpam/setup (36.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-157000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-157000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 --driver=qemu2 : (36.611643167s)
--- PASS: TestErrorSpam/setup (36.61s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 pause
--- PASS: TestErrorSpam/pause (0.62s)

TestErrorSpam/unpause (0.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 unpause
--- PASS: TestErrorSpam/unpause (0.57s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop: (3.171328042s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop: (26.06223625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-157000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-157000 stop: (26.030165917s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19349-1049/.minikube/files/etc/test/nested/copy/1552/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-966000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (50.095303834s)
--- PASS: TestFunctional/serial/StartWithProxy (50.10s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.46s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --alsologtostderr -v=8
E0731 09:47:03.546067    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.552946    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.565006    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.587080    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.627146    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.708550    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:03.870662    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:04.192763    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:04.834989    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:06.117152    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:08.679277    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:13.801364    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:47:24.042909    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-966000 --alsologtostderr -v=8: (37.455827333s)
functional_test.go:659: soft start took 37.456234916s for "functional-966000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.46s)
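
The soft-start flow above can be replayed by hand. A minimal sketch using this run's profile name (the repeated cert_rotation errors reference a client.crt from the earlier addons-412000 profile and did not affect the result):

  $ out/minikube-darwin-arm64 start -p functional-966000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2
  $ out/minikube-darwin-arm64 start -p functional-966000 --alsologtostderr -v=8    # second start reuses the running profile ("soft start")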

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-966000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local20329255/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache add minikube-local-cache-test:functional-966000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache delete minikube-local-cache-test:functional-966000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-966000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)
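
The local-image variant builds a throwaway image on the host and pushes it through the cache. A sketch of the same cycle; the build directory is a placeholder for any directory containing a Dockerfile (the test uses a temp dir):

  $ docker build -t minikube-local-cache-test:functional-966000 ./build-context
  $ out/minikube-darwin-arm64 -p functional-966000 cache add minikube-local-cache-test:functional-966000
  $ out/minikube-darwin-arm64 -p functional-966000 cache delete minikube-local-cache-test:functional-966000
  $ docker rmi minikube-local-cache-test:functional-966000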

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.477167ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.62s)
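
The reload round-trip exercised above, as a standalone sketch (crictl inspecti is crictl's inspect-image subcommand):

  $ out/minikube-darwin-arm64 -p functional-966000 ssh sudo docker rmi registry.k8s.io/pause:latest
  $ out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image is gone
  $ out/minikube-darwin-arm64 -p functional-966000 cache reload
  $ out/minikube-darwin-arm64 -p functional-966000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload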

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 kubectl -- --context functional-966000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.98s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-966000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (35.66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 09:47:44.524843    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-966000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.659951834s)
functional_test.go:757: restart took 35.660073292s for "functional-966000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.66s)
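
--extra-config takes component.key=value pairs; applying one restarts the cluster, which with --wait=all accounts for the ~35s here. The invocation from this run:

  $ out/minikube-darwin-arm64 start -p functional-966000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all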

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-966000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1408248058/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (3.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-966000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-966000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-966000: exit status 115 (101.887583ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32419 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-966000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.71s)
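
Exit status 115 is the code observed for SVC_UNREACHABLE: the service object exists (the URL table is even printed) but no running pod backs it. A sketch of the same check, assuming the invalidsvc.yaml manifest from the repo's testdata:

  $ kubectl --context functional-966000 apply -f testdata/invalidsvc.yaml
  $ out/minikube-darwin-arm64 service invalid-svc -p functional-966000    # exit 115: no running pod for the service
  $ kubectl --context functional-966000 delete -f testdata/invalidsvc.yaml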

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 config get cpus: exit status 14 (35.298ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 config get cpus: exit status 14 (30.796791ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
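
The exercised set/get/unset cycle, with the exit codes observed above (config get returns 14 when the key is absent):

  $ out/minikube-darwin-arm64 -p functional-966000 config set cpus 2
  $ out/minikube-darwin-arm64 -p functional-966000 config get cpus      # prints the stored value
  $ out/minikube-darwin-arm64 -p functional-966000 config unset cpus
  $ out/minikube-darwin-arm64 -p functional-966000 config get cpus      # exit 14: key not found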

TestFunctional/parallel/DashboardCmd (8.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-966000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-966000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2217: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.62s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-966000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.722333ms)
-- stdout --
	* [functional-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0731 09:48:53.529834    2204 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:48:53.529992    2204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.529996    2204 out.go:304] Setting ErrFile to fd 2...
	I0731 09:48:53.529998    2204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.530123    2204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:48:53.531188    2204 out.go:298] Setting JSON to false
	I0731 09:48:53.547814    2204 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1097,"bootTime":1722443436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:48:53.547896    2204 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:48:53.552286    2204 out.go:177] * [functional-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 09:48:53.555326    2204 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 09:48:53.555332    2204 notify.go:220] Checking for updates...
	I0731 09:48:53.562268    2204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:48:53.565290    2204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:48:53.568207    2204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:48:53.571280    2204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 09:48:53.574288    2204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 09:48:53.577506    2204 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 09:48:53.577753    2204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:48:53.582230    2204 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 09:48:53.589273    2204 start.go:297] selected driver: qemu2
	I0731 09:48:53.589279    2204 start.go:901] validating driver "qemu2" against &{Name:functional-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:48:53.589329    2204 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 09:48:53.594191    2204 out.go:177] 
	W0731 09:48:53.598211    2204 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 09:48:53.601292    2204 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
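
--dry-run validates flags against the existing profile without touching the VM; exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is returned because the requested 250MiB is below the 1800MB usable minimum. The two invocations from this run:

  $ out/minikube-darwin-arm64 start -p functional-966000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
  $ out/minikube-darwin-arm64 start -p functional-966000 --dry-run --alsologtostderr -v=1 --driver=qemu2             # validation passes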

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-966000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-966000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.151625ms)
-- stdout --
	* [functional-966000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0731 09:48:53.407209    2200 out.go:291] Setting OutFile to fd 1 ...
	I0731 09:48:53.407321    2200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.407324    2200 out.go:304] Setting ErrFile to fd 2...
	I0731 09:48:53.407330    2200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 09:48:53.407462    2200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
	I0731 09:48:53.408833    2200 out.go:298] Setting JSON to false
	I0731 09:48:53.426200    2200 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1097,"bootTime":1722443436,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 09:48:53.426293    2200 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 09:48:53.431259    2200 out.go:177] * [functional-966000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0731 09:48:53.439295    2200 out.go:177]   - MINIKUBE_LOCATION=19349
	I0731 09:48:53.439323    2200 notify.go:220] Checking for updates...
	I0731 09:48:53.447289    2200 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	I0731 09:48:53.451223    2200 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 09:48:53.454224    2200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 09:48:53.457245    2200 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	I0731 09:48:53.460133    2200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 09:48:53.463541    2200 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 09:48:53.463801    2200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 09:48:53.468275    2200 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 09:48:53.475238    2200 start.go:297] selected driver: qemu2
	I0731 09:48:53.475244    2200 start.go:901] validating driver "qemu2" against &{Name:functional-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 09:48:53.475290    2200 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 09:48:53.481205    2200 out.go:177] 
	W0731 09:48:53.485240    2200 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 09:48:53.492626    2200 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
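
The three status forms used above (the "kublet" label is spelled that way in the test's own format string; the Go template pulls fields from the status struct):

  $ out/minikube-darwin-arm64 -p functional-966000 status
  $ out/minikube-darwin-arm64 -p functional-966000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  $ out/minikube-darwin-arm64 -p functional-966000 status -o json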

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 addons list
E0731 09:48:25.484539    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (24.85s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7903e59e-7ca6-4b14-9748-48e236006bda] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003688166s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-966000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-966000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-966000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-966000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b963fb10-ad72-4689-950d-b52eb223ae4c] Pending
helpers_test.go:344: "sp-pod" [b963fb10-ad72-4689-950d-b52eb223ae4c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b963fb10-ad72-4689-950d-b52eb223ae4c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003844166s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-966000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-966000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-966000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08b3de9b-8071-4901-aae7-4240f128a03b] Pending
helpers_test.go:344: "sp-pod" [08b3de9b-8071-4901-aae7-4240f128a03b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [08b3de9b-8071-4901-aae7-4240f128a03b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003881208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-966000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.85s)
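
The persistence check reduces to: write through the claim, recreate the pod, read back. A sketch assuming the repo's testdata manifests (sp-pod mounts the claim at /tmp/mount):

  $ kubectl --context functional-966000 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-966000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-966000 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-966000 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-966000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-966000 exec sp-pod -- ls /tmp/mount    # foo survives the pod recreation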

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh -n functional-966000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cp functional-966000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1572578249/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh -n functional-966000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh -n functional-966000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.40s)
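
minikube cp works in both directions; the guest-to-host form prefixes the source path with the node name. A sketch (the host destination is a placeholder for the temp file used in this run):

  $ out/minikube-darwin-arm64 -p functional-966000 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-darwin-arm64 -p functional-966000 cp functional-966000:/home/docker/cp-test.txt ./cp-test.txt
  $ out/minikube-darwin-arm64 -p functional-966000 ssh -n functional-966000 "sudo cat /home/docker/cp-test.txt"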

TestFunctional/parallel/FileSync (0.1s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1552/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/test/nested/copy/1552/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.10s)
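
Files placed under the MINIKUBE_HOME files tree are mirrored into the guest at the same relative path (the local sync path logged in CopySyncFile surfaces as /etc/test/nested/copy/1552/hosts here). A sketch, assuming the sync happens on the next start of the profile:

  $ echo 'Test file for checking file sync process' > $MINIKUBE_HOME/files/etc/test/nested/copy/1552/hosts
  $ out/minikube-darwin-arm64 start -p functional-966000
  $ out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/test/nested/copy/1552/hosts"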

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1552.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/ssl/certs/1552.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1552.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /usr/share/ca-certificates/1552.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/ssl/certs/15522.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /usr/share/ca-certificates/15522.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-966000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh "sudo systemctl is-active crio": exit status 1 (59.445666ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
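
With docker as the active runtime, the others must be stopped; systemctl is-active exits non-zero for an inactive unit (status 3, which surfaces as the ssh exit above), so the check passes precisely when the command fails:

  $ out/minikube-darwin-arm64 -p functional-966000 ssh "sudo systemctl is-active crio"   # prints "inactive", non-zero exit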

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.87s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2063: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.87s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-966000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1731b006-0571-40fd-b498-e052085b844e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1731b006-0571-40fd-b498-e052085b844e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00377475s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-966000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.247.248 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
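
With the tunnel up, the cluster DNS service (10.96.0.10) is reachable from the host, so in-cluster names resolve directly; the query from this run:

  $ dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A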

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-966000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-966000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-966000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-ncd66" [7b7beae5-78d8-4b01-98d3-5feec89aaea9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-ncd66" [7b7beae5-78d8-4b01-98d3-5feec89aaea9] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004343583s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service list -o json
functional_test.go:1490: Took "275.195125ms" to run "out/minikube-darwin-arm64 -p functional-966000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30721
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30721
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
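
The endpoint-discovery variants exercised by the ServiceCmd group, with the NodePort endpoint found in this run:

  $ out/minikube-darwin-arm64 -p functional-966000 service list
  $ out/minikube-darwin-arm64 -p functional-966000 service --namespace=default --https --url hello-node   # https://192.168.105.4:30721
  $ out/minikube-darwin-arm64 -p functional-966000 service hello-node --url --format={{.IP}}              # node IP only
  $ out/minikube-darwin-arm64 -p functional-966000 service hello-node --url                               # http://192.168.105.4:30721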

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "81.863083ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.573625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "83.577333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.46275ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722444526450555000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722444526450555000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722444526450555000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001/test-1722444526450555000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.384583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 16:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 16:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 16:48 test-1722444526450555000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh cat /mount-9p/test-1722444526450555000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-966000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6ee051a6-a7b9-4016-8a3a-89808d8224cd] Pending
helpers_test.go:344: "busybox-mount" [6ee051a6-a7b9-4016-8a3a-89808d8224cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6ee051a6-a7b9-4016-8a3a-89808d8224cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6ee051a6-a7b9-4016-8a3a-89808d8224cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003878542s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-966000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3956944043/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.11s)
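
Note: the any-port flow above is: start a 9p mount daemon on an ephemeral port, poll findmnt inside the guest until the mount shows up (the first findmnt attempt exiting 1 is an expected race that the test retries), exercise the mount from a pod, then unmount. A minimal sketch using the log's commands ($SRC is a placeholder for the host temp directory used by this run):

    out/minikube-darwin-arm64 mount -p functional-966000 $SRC:/mount-9p &    # background mount daemon
    out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-966000 ssh -- ls -la /mount-9p
    out/minikube-darwin-arm64 -p functional-966000 ssh "sudo umount -f /mount-9p"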

TestFunctional/parallel/MountCmd/specific-port (0.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port198263675/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.761583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port198263675/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh "sudo umount -f /mount-9p": exit status 1 (58.208417ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-966000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port198263675/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.74s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T" /mount1: exit status 1 (73.639166ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-966000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-966000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2467622045/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.84s)
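
Note: VerifyCleanup starts three concurrent mount daemons (/mount1, /mount2, /mount3) and tears them all down with the mount kill switch; the "unable to find parent, assuming dead" lines just confirm the daemons were already gone when the test tried to stop them. The cleanup command from the log:

    out/minikube-darwin-arm64 mount -p functional-966000 --kill=true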

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-966000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-966000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-966000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-966000 image ls --format short --alsologtostderr:
I0731 09:49:04.259981    2367 out.go:291] Setting OutFile to fd 1 ...
I0731 09:49:04.260112    2367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.260115    2367 out.go:304] Setting ErrFile to fd 2...
I0731 09:49:04.260118    2367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.260266    2367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:49:04.260696    2367 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.260757    2367 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.261558    2367 ssh_runner.go:195] Run: systemctl --version
I0731 09:49:04.261565    2367 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/functional-966000/id_rsa Username:docker}
I0731 09:49:04.285073    2367 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-966000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-966000 | 892217e3e8739 | 30B    |
| docker.io/kicbase/echo-server               | functional-966000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-966000 image ls --format table --alsologtostderr:
I0731 09:49:04.653387    2378 out.go:291] Setting OutFile to fd 1 ...
I0731 09:49:04.653547    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.653551    2378 out.go:304] Setting ErrFile to fd 2...
I0731 09:49:04.653554    2378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.653695    2378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:49:04.654129    2378 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.654193    2378 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.654998    2378 ssh_runner.go:195] Run: systemctl --version
I0731 09:49:04.655005    2378 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/functional-966000/id_rsa Username:docker}
I0731 09:49:04.680690    2378 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-966000 image ls --format json --alsologtostderr:
[{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"61773190d42ff0792f3bab2658e80
b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"rep
oTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-966000"],"size":"4780000"},{"id":"892217e3e873913d20f23c7b9aae4c5c497e265be42aec16a3aea895eb701891","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-966000"],"size":"30"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"s
ize":"87900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-966000 image ls --format json --alsologtostderr:
I0731 09:49:04.580185    2376 out.go:291] Setting OutFile to fd 1 ...
I0731 09:49:04.580335    2376 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.580339    2376 out.go:304] Setting ErrFile to fd 2...
I0731 09:49:04.580341    2376 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.580471    2376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:49:04.580925    2376 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.580982    2376 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.581793    2376 ssh_runner.go:195] Run: systemctl --version
I0731 09:49:04.581801    2376 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/functional-966000/id_rsa Username:docker}
I0731 09:49:04.605320    2376 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-966000 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-966000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 892217e3e873913d20f23c7b9aae4c5c497e265be42aec16a3aea895eb701891
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-966000
size: "30"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-966000 image ls --format yaml --alsologtostderr:
I0731 09:49:04.325113    2369 out.go:291] Setting OutFile to fd 1 ...
I0731 09:49:04.325258    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.325261    2369 out.go:304] Setting ErrFile to fd 2...
I0731 09:49:04.325263    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.325395    2369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:49:04.325819    2369 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.325880    2369 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.326670    2369 ssh_runner.go:195] Run: systemctl --version
I0731 09:49:04.326677    2369 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/functional-966000/id_rsa Username:docker}
I0731 09:49:04.350925    2369 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
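
Note: the four ImageList subtests render the same image inventory in every supported output mode; only the formatting differs. The invocations from the log (the tests also add --alsologtostderr, which produces the I-prefixed trace lines shown above):

    out/minikube-darwin-arm64 -p functional-966000 image ls --format short
    out/minikube-darwin-arm64 -p functional-966000 image ls --format table
    out/minikube-darwin-arm64 -p functional-966000 image ls --format json
    out/minikube-darwin-arm64 -p functional-966000 image ls --format yaml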

TestFunctional/parallel/ImageCommands/ImageBuild (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-966000 ssh pgrep buildkitd: exit status 1 (55.829333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image build -t localhost/my-image:functional-966000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-966000 image build -t localhost/my-image:functional-966000 testdata/build --alsologtostderr: (1.500399s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-966000 image build -t localhost/my-image:functional-966000 testdata/build --alsologtostderr:
I0731 09:49:04.448783    2374 out.go:291] Setting OutFile to fd 1 ...
I0731 09:49:04.448969    2374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.448972    2374 out.go:304] Setting ErrFile to fd 2...
I0731 09:49:04.448974    2374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 09:49:04.449107    2374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19349-1049/.minikube/bin
I0731 09:49:04.449529    2374 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.450280    2374 config.go:182] Loaded profile config "functional-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 09:49:04.451119    2374 ssh_runner.go:195] Run: systemctl --version
I0731 09:49:04.451126    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19349-1049/.minikube/machines/functional-966000/id_rsa Username:docker}
I0731 09:49:04.474610    2374 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3740285110.tar
I0731 09:49:04.474668    2374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 09:49:04.478319    2374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3740285110.tar
I0731 09:49:04.480072    2374 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3740285110.tar: stat -c "%s %y" /var/lib/minikube/build/build.3740285110.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3740285110.tar': No such file or directory
I0731 09:49:04.480090    2374 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3740285110.tar --> /var/lib/minikube/build/build.3740285110.tar (3072 bytes)
I0731 09:49:04.488791    2374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3740285110
I0731 09:49:04.492348    2374 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3740285110 -xf /var/lib/minikube/build/build.3740285110.tar
I0731 09:49:04.496080    2374 docker.go:360] Building image: /var/lib/minikube/build/build.3740285110
I0731 09:49:04.496129    2374 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-966000 /var/lib/minikube/build/build.3740285110
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:54fafeebdc2acee1ed76f21ed22db514e859f79180be1b0b2f28a13954c8a45c done
#8 naming to localhost/my-image:functional-966000 done
#8 DONE 0.0s
I0731 09:49:05.907357    2374 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-966000 /var/lib/minikube/build/build.3740285110: (1.411238084s)
I0731 09:49:05.907430    2374 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3740285110
I0731 09:49:05.911679    2374 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3740285110.tar
I0731 09:49:05.914836    2374 build_images.go:217] Built localhost/my-image:functional-966000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3740285110.tar
I0731 09:49:05.914852    2374 build_images.go:133] succeeded building to: functional-966000
I0731 09:49:05.914854    2374 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.63s)
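
Note: ImageBuild first probes for a standalone buildkitd over SSH (the pgrep exit status 1 above is the expected "not running" answer, so the build appears to go through the docker runtime instead), then ships the build context into the guest as a tarball and builds it there. The commands from the log:

    out/minikube-darwin-arm64 -p functional-966000 ssh pgrep buildkitd    # probe; exit 1 is tolerated
    out/minikube-darwin-arm64 -p functional-966000 image build -t localhost/my-image:functional-966000 testdata/build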

TestFunctional/parallel/ImageCommands/Setup (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.64788625s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-966000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image load --daemon docker.io/kicbase/echo-server:functional-966000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image load --daemon docker.io/kicbase/echo-server:functional-966000 --alsologtostderr
2024/07/31 09:49:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.58s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-966000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image load --daemon docker.io/kicbase/echo-server:functional-966000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)
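
Note: Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon together check that a locally tagged image can be pushed into the cluster runtime repeatedly. The round trip from the log:

    docker pull docker.io/kicbase/echo-server:1.0
    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-966000
    out/minikube-darwin-arm64 -p functional-966000 image load --daemon docker.io/kicbase/echo-server:functional-966000
    out/minikube-darwin-arm64 -p functional-966000 image ls    # verify the tag is visible in-cluster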

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-966000 docker-env) && out/minikube-darwin-arm64 status -p functional-966000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-966000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
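
Note: DockerEnv/bash verifies that eval-ing the docker-env output points a host docker client at the daemon inside the minikube VM. The two checks from the log:

    /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-966000 docker-env) && out/minikube-darwin-arm64 status -p functional-966000"
    /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-966000 docker-env) && docker images"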

TestFunctional/parallel/UpdateContextCmd/no_changes (1.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-darwin-arm64 -p functional-966000 update-context --alsologtostderr -v=2: (1.190967458s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
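
Note: all three UpdateContextCmd subtests run the same command and only vary the surrounding state; update-context rewrites the profile's kubeconfig entry to match the cluster's current IP and port. The invocation from the log:

    out/minikube-darwin-arm64 -p functional-966000 update-context --alsologtostderr -v=2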

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image save docker.io/kicbase/echo-server:functional-966000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image rm docker.io/kicbase/echo-server:functional-966000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-966000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-966000 image save --daemon docker.io/kicbase/echo-server:functional-966000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-966000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)
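
Note: ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon exercise the full save/load round trip for an in-cluster image. The sequence from the log (the tar path is specific to this Jenkins workspace):

    # export the in-cluster image to a host tarball
    out/minikube-darwin-arm64 -p functional-966000 image save docker.io/kicbase/echo-server:functional-966000 /Users/jenkins/workspace/echo-server-save.tar
    # load it back into the cluster from that tarball
    out/minikube-darwin-arm64 -p functional-966000 image load /Users/jenkins/workspace/echo-server-save.tar
    # or copy it straight into the host docker daemon
    out/minikube-darwin-arm64 -p functional-966000 image save --daemon docker.io/kicbase/echo-server:functional-966000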

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-966000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-966000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-966000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (204.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-217000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0731 09:49:47.405398    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
E0731 09:52:03.541020    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-217000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m24.779031209s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.97s)
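
Note: StartCluster brings up a multi-control-plane (HA) cluster under the qemu2 driver. The E-prefixed cert_rotation lines appear to come from a background certificate watcher still referencing the deleted addons-412000 profile; they do not affect this test's result. The commands from the log:

    out/minikube-darwin-arm64 start -p ha-217000 --wait=true --memory=2200 --ha -v=7 --driver=qemu2
    out/minikube-darwin-arm64 -p ha-217000 status -v=7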

TestMultiControlPlane/serial/DeployApp (3.68s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0731 09:52:31.244807    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/addons-412000/client.crt: no such file or directory
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-217000 -- rollout status deployment/busybox: (2.167898416s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-kglps -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-vpjft -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-kglps -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-vpjft -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-kglps -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-vpjft -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.68s)
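
Note: DeployApp rolls out a busybox deployment across the HA cluster and then checks in-cluster DNS from every replica. The per-pod check from the log (busybox-fc5497c4f-8cjxb is one of this run's pod names):

    out/minikube-darwin-arm64 kubectl -p ha-217000 -- rollout status deployment/busybox
    out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- nslookup kubernetes.default.svc.cluster.local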

TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-kglps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-kglps -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-vpjft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-vpjft -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
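
Note: PingHostFromPods resolves host.minikube.internal inside each pod and pings the address it returns (192.168.105.1 in this run). The per-pod check from the log:

    out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-darwin-arm64 kubectl -p ha-217000 -- exec busybox-fc5497c4f-8cjxb -- sh -c "ping -c 1 192.168.105.1"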

TestMultiControlPlane/serial/AddWorkerNode (84.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-217000 -v=7 --alsologtostderr
E0731 09:53:14.307015    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.313339    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.325397    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.346894    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.388970    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.471043    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.631315    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:14.953390    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:15.595567    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:16.877726    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:19.439830    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:24.561946    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:34.803174    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
E0731 09:53:55.284996    1552 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19349-1049/.minikube/profiles/functional-966000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-217000 -v=7 --alsologtostderr: (1m23.938776708s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (84.17s)
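
Note: AddWorkerNode grows the running HA cluster by one node and re-checks status; the long run of functional-966000/client.crt E-lines is again the stale certificate watcher, not a failure of this test. The commands from the log:

    out/minikube-darwin-arm64 node add -p ha-217000 -v=7
    out/minikube-darwin-arm64 -p ha-217000 status -v=7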

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-217000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

TestMultiControlPlane/serial/CopyFile (4.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp testdata/cp-test.txt ha-217000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3717630551/001/cp-test_ha-217000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000:/home/docker/cp-test.txt ha-217000-m02:/home/docker/cp-test_ha-217000_ha-217000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test_ha-217000_ha-217000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000:/home/docker/cp-test.txt ha-217000-m03:/home/docker/cp-test_ha-217000_ha-217000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test_ha-217000_ha-217000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000:/home/docker/cp-test.txt ha-217000-m04:/home/docker/cp-test_ha-217000_ha-217000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test_ha-217000_ha-217000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp testdata/cp-test.txt ha-217000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3717630551/001/cp-test_ha-217000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m02:/home/docker/cp-test.txt ha-217000:/home/docker/cp-test_ha-217000-m02_ha-217000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test_ha-217000-m02_ha-217000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m02:/home/docker/cp-test.txt ha-217000-m03:/home/docker/cp-test_ha-217000-m02_ha-217000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test_ha-217000-m02_ha-217000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m02:/home/docker/cp-test.txt ha-217000-m04:/home/docker/cp-test_ha-217000-m02_ha-217000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test_ha-217000-m02_ha-217000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp testdata/cp-test.txt ha-217000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3717630551/001/cp-test_ha-217000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m03:/home/docker/cp-test.txt ha-217000:/home/docker/cp-test_ha-217000-m03_ha-217000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test_ha-217000-m03_ha-217000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m03:/home/docker/cp-test.txt ha-217000-m02:/home/docker/cp-test_ha-217000-m03_ha-217000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test_ha-217000-m03_ha-217000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m03:/home/docker/cp-test.txt ha-217000-m04:/home/docker/cp-test_ha-217000-m03_ha-217000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test_ha-217000-m03_ha-217000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp testdata/cp-test.txt ha-217000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3717630551/001/cp-test_ha-217000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m04:/home/docker/cp-test.txt ha-217000:/home/docker/cp-test_ha-217000-m04_ha-217000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000 "sudo cat /home/docker/cp-test_ha-217000-m04_ha-217000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m04:/home/docker/cp-test.txt ha-217000-m02:/home/docker/cp-test_ha-217000-m04_ha-217000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m02 "sudo cat /home/docker/cp-test_ha-217000-m04_ha-217000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 cp ha-217000-m04:/home/docker/cp-test.txt ha-217000-m03:/home/docker/cp-test_ha-217000-m04_ha-217000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-217000 ssh -n ha-217000-m03 "sudo cat /home/docker/cp-test_ha-217000-m04_ha-217000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.36s)
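Note: every CopyFile step above is the same round trip: `minikube cp` a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back. A minimal Go sketch of that verify loop (profile, binary, and paths are taken from the log; the harness code itself is illustrative, not minikube's test source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// copyAndVerify pushes a local file onto a node with `minikube cp`
// and reads it back over `minikube ssh` to confirm the contents match.
func copyAndVerify(profile, node, local, remote, want string) error {
	cp := exec.Command("out/minikube-darwin-arm64", "-p", profile, "cp", local, node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	cat := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh", "-n", node,
		fmt.Sprintf("sudo cat %s", remote))
	out, err := cat.Output()
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(out)) != want {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	// Mirrors one leg of the test: testdata/cp-test.txt -> ha-217000-m02.
	// "expected contents" stands in for whatever the fixture actually holds.
	if err := copyAndVerify("ha-217000", "ha-217000-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt", "expected contents"); err != nil {
		fmt.Println(err)
	}
}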

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (28.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (28.963000167s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (28.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-565000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-565000 --output=json --user=testUser: (1.870337417s)
--- PASS: TestJSONOutput/stop/Command (1.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-238000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-238000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.765584ms)

-- stdout --
	{"specversion":"1.0","id":"b4355ebb-e06e-45c3-b28f-6fb97f272b77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-238000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bf5214c-2f5b-4bba-b96b-93b6b22f3c86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19349"}}
	{"specversion":"1.0","id":"2f50d58a-3879-4259-880d-c5e0cc910bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig"}}
	{"specversion":"1.0","id":"726db0e4-ee8c-4f19-81d6-065bc1ab6f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"cc0cd7e0-c65a-43ca-b787-6287e6eee80f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7042fa9-2521-4b0a-a91e-5a40d231bcd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube"}}
	{"specversion":"1.0","id":"0a6abd41-a738-440c-8d2c-a7b6d2d0cd2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"831af629-7e8f-46cd-b42f-5c149485f479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-238000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-238000
--- PASS: TestErrorJSONOutput (0.19s)
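Note: each line minikube prints with --output=json is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data), as the stdout above shows. A minimal Go sketch of consuming that stream line by line (the struct is mine, modeled only on the JSON keys visible above, not on minikube's own types):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the envelope visible in the test's stdout; the field
// names follow the printed JSON keys, the struct itself is illustrative.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// io.k8s.sigs.minikube.error events carry an exitcode, like the
		// DRV_UNSUPPORTED_OS / exit 56 event above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}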

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.86s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-608000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-330000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.785417ms)

-- stdout --
	* [NoKubernetes-330000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19349
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19349-1049/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19349-1049/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
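Note: the non-zero exit here is the expected behavior: --no-kubernetes and --kubernetes-version contradict each other, so minikube fails fast with MK_USAGE (exit 14) before touching the driver. A sketch of that kind of mutually-exclusive-flag guard (illustrative only, not minikube's source):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Fail fast on contradictory flags, before any VM work starts.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors exit 14, matching the test's expectation
	}
	fmt.Println("flags ok")
}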

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-330000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-330000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.594708ms)

-- stdout --
	* The control-plane node NoKubernetes-330000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-330000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (15.76s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.665167792s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.76s)

TestNoKubernetes/serial/Stop (3.43s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-330000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-330000: (3.428041125s)
--- PASS: TestNoKubernetes/serial/Stop (3.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-330000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-330000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (38.887166ms)

-- stdout --
	* The control-plane node NoKubernetes-330000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-330000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-537000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-537000 --alsologtostderr -v=3: (3.387033708s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-537000 -n old-k8s-version-537000: exit status 7 (56.173209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-537000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
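Note: EnableAddonAfterStop (here and in the groups below) leans on the exit code of `minikube status`: right after `minikube stop`, status exits 7 with "Stopped" on stdout, which the test tolerates ("status error: exit status 7 (may be ok)") before enabling the dashboard addon. A sketch of reading that exit code (treating 7 as stopped-but-acceptable is this test's convention, not a general rule):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-537000")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		// Exit 7 with "Stopped" is expected right after `minikube stop`;
		// the addon can still be enabled against the stopped profile.
		fmt.Printf("host reported %q, continuing\n", string(out))
		return
	}
	if err != nil {
		fmt.Println("unexpected status error:", err)
	}
}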

TestStartStop/group/no-preload/serial/Stop (3.45s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-364000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-364000 --alsologtostderr -v=3: (3.453905583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-364000 -n no-preload-364000: exit status 7 (58.126542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-364000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-134000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-134000 --alsologtostderr -v=3: (1.998574709s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-134000 -n embed-certs-134000: exit status 7 (58.304916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-134000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-833000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-833000 --alsologtostderr -v=3: (1.955568958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-833000 -n default-k8s-diff-port-833000: exit status 7 (55.359958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-833000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-766000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-766000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-766000 --alsologtostderr -v=3: (3.238090792s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-766000 -n newest-cni-766000: exit status 7 (53.59525ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-766000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-336000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-336000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-336000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-336000" does not exist

>>> k8s: coredns logs:
error: context "cilium-336000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-336000" does not exist

>>> k8s: api server logs:
error: context "cilium-336000" does not exist

>>> host: /etc/cni:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: ip a s:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: ip r s:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: iptables-save:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: iptables table nat:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-336000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-336000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-336000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-336000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-336000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-336000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-336000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-336000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: kubelet daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: kubelet logs:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-336000

>>> host: docker daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: docker daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: docker system info:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-docker daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-docker daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: cri-dockerd version:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: containerd config dump:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio daemon status:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio daemon config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: /etc/crio:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

>>> host: crio config:
* Profile "cilium-336000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-336000"

----------------------- debugLogs end: cilium-336000 [took: 2.178341875s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-336000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-336000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-757000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-757000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)